# Probe Run View

The **Probe Run View** page displays and visualizes the test data of the selected probe's run within your executed test run (Figure 1). This view presents the results of all executed test cases, including details such as the **Attack Strategies**, **Variations**, and **Messages** that the probe exchanged with the Target.

{% hint style="info" %}
To understand terms such as **Strategy, Variation, and Red Teamer**, see the [**Test Case Parametrization**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/test-case-parametrization) page.
{% endhint %}

From this page, you can:

* Generate an [**AI Analysis**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/analyze-with-ai) of a Probe run
* [**Track an Issue**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/tracking-an-issue)

You can access the Probe Run View section by clicking on a [**Probe Card**](https://docs.probe.splx.ai/ai-red-teaming/overview-page#categories-overview) on the Overview page or by clicking on the relevant row in the [**Probe Run Table**](https://docs.probe.splx.ai/ai-red-teaming/test-run/test-run-view#probes-table) on the Test Run page.

At the top of the page, you will find basic information about the probe's run, including the total number of test cases, the number of failed and error test cases, the execution date and time, the run's status, and a progress bar.

<figure><img src="https://1029475228-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fi12bk7lo75SODuwcRCQp%2Fuploads%2Fyv8cfCdOA0CAIBOLR2dM%2F2_test_runs_contextLeakage.png?alt=media&#x26;token=cef55a62-36fc-401a-9b23-1cbbad821fe5" alt=""><figcaption><p>Figure 1: Probe Run View for Context Leakage</p></figcaption></figure>

## Probe Run Results

### Sankey Diagram

The **Sankey diagram** (Figure 1) visually depicts the **connections among strategy, red teamer, variation, and the outcomes of test cases**. The width of the flow between nodes in the diagram corresponds to the number of test cases, while the color coding represents the percentage of failed test cases out of the total executed.

### Probe Result Table

At the bottom of the [**Probe Run View**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/probe-run-view) page, the **Probe Result Table** displays all of the probe's **Test Cases** executed against your target.

The table contains:

* **Id**: Unique identifier of the test case.
* **Attack:** Unique identifier of the attack.
* **Strategy, Red Teamer, Variation**: Explained on the [**Test Case Parametrization**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/test-case-parametrization) page.
* **Detection Time:** The timestamp of the moment when the automated decision was made on whether the test case passed or failed.
* **In Report:** Indicates whether the test case has been flagged for inclusion in the report.
* **Actions Icon:** Indicates the actions taken on the test case (accepting risk or changing the result status).
* **Result**: Outcome of the executed test case.
  * **Passed**: The test did not detect the vulnerability in your application.
  * **Failed**: The attack found the targeted vulnerability.
  * **Error**: An error occurred while communicating with the target.

Both filtering and global search functionalities are available to customize the view according to your specific preferences.

The table (with any applied filters preserved) can be **exported in CSV and JSON** formats, allowing for easy integration with other tools and systems for further analysis or reporting. The option to export CSV or JSON **with review tags** also includes the comments left on each test case.
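As a sketch of how an exported file might be processed downstream, the snippet below parses a JSON export and tallies test cases by outcome. The field names (`id`, `strategy`, `result`) and the sample records are assumptions for illustration only; consult your actual export for the exact schema.

```python
import json

# Hypothetical sample mimicking a JSON export of the Probe Result Table.
# The real field names and structure may differ from this sketch.
export = json.loads("""
[
  {"id": 1, "strategy": "Roleplay", "result": "Passed"},
  {"id": 2, "strategy": "Roleplay", "result": "Failed"},
  {"id": 3, "strategy": "Encoding", "result": "Error"}
]
""")

# Tally test cases by outcome, e.g. to feed a custom report or dashboard.
counts = {}
for case in export:
    counts[case["result"]] = counts.get(case["result"], 0) + 1

print(counts)
```

The same approach works for a CSV export via `csv.DictReader`, grouping on whichever column holds the result status.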

To view detailed **interactions** between the probe and the target for each test case, as well as an **explanation of the test case result**, click on the corresponding row. Refer to the [**Test Case Details**](https://docs.probe.splx.ai/ai-red-teaming/probe/probe-run/test-case-details) page for further explanation.

<figure><img src="https://1029475228-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fi12bk7lo75SODuwcRCQp%2Fuploads%2Fvg88TybtNNiwHcSzYoFZ%2Fprobe.splx.ai_w_31_target_15%20(1).png?alt=media&#x26;token=c7cb60dd-53bf-4411-81d2-4929a9474bf1" alt=""><figcaption><p>Figure 2: Probe Result Table</p></figcaption></figure>

### Rerun Probe (Continue Probe Run)

A **Rerun** action is available for **Probe Runs**. It is intended to support recovery when issues occur during scanning (e.g., interruptions or errors), so that a run does not need to be started from the beginning. When a rerun is initiated, execution is continued **from the point where the run stopped**, and attacks that previously ended in an error can be **retried**.

Re-running a probe uses the **latest Target configuration** and **current rate limit** settings. This avoids continuing a run with stale settings captured during the initial start of the probe.

{% hint style="warning" %}
* **Reruns do not consume additional credits.**
* **Results may vary if the probe configuration or probe version has changed since the original run.**
{% endhint %}

<figure><img src="https://1029475228-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fi12bk7lo75SODuwcRCQp%2Fuploads%2FVTwi03O8OnrBwvgeZLEU%2Fprobe.dev.splx.ai_w_213_target_787_test-runs_7171_probe_15824_tab%3Dresults.png?alt=media&#x26;token=05a42f4c-d487-4df0-9445-31d8f04e7a1f" alt=""><figcaption><p>Figure 3: Rerun Probe / Continue Probe Run</p></figcaption></figure>

#### Expected differences in attacks on rerun

During a rerun, the exact set of attacks should not be assumed to be identical to the original run:

* If the **Probe version has been updated**, different attacks, strategies, and variations may be produced, and execution may differ significantly from the original run.
* Because attacks are **dynamically generated**, identical attacks are not guaranteed even when the **same Probe version** is used.
* If the **Probe configuration has changed**, the updated configuration is applied **only to newly generated or not-yet-executed attacks**. Attacks that have already been executed are not retroactively modified.
