Practical Examples

This page provides a structured overview and practical guide to working with the Platform API. It is intended for developers, data scientists, and technical professionals who need to integrate, test, and analyze data using the SPLX Platform.

The document consolidates explanations, implementation details, and executable examples into a single reference, enabling both quick onboarding and deeper exploration of the API.

Objectives

  • Introduce the core concepts and capabilities of the Platform API

  • Provide clear examples of request and response patterns

  • Demonstrate typical workflows and integration strategies

  • Offer reproducible code samples for practical experimentation

Imports

import requests
import json

Setting variables

# Note: these variables must be set according to your deployment URL; otherwise, the API calls will not work.

# Example URL: 
# https://probe.splx.ai/w/290/target/91/test-runs/862/probe/1815?tab=results

url = "https://api.probe.splx.ai" # URL for the EU deployment; us.api.probe.splx.ai for US
workspace_id = "290"
target_id = "91"
test_run_id = "862"
probe_run_id = "1815"

Example: Export probe run test cases

Endpoints for Replicating PDF Reports

The following endpoints are used to export the findings shown in the PDF reports that can be downloaded from the Platform:

  • Get test run status

  • Retrieve probe run execution data and analysis results

  • Retrieve overall scores and category breakdown for a target

The combined results of these three calls provide the status, detailed findings, and summary metrics required to replicate the SPLX Platform PDF reports.
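
A minimal sketch of the three calls, using the variables and headers defined above. The endpoint paths are illustrative placeholders rather than the documented routes, so substitute the exact paths from the API reference.

# Illustrative endpoint paths -- replace them with the exact routes from the API reference.

# 1. Get test run status
status_resp = requests.get(
    f"{url}/workspaces/{workspace_id}/targets/{target_id}/test-runs/{test_run_id}",
    headers=headers,
)
status_resp.raise_for_status()
test_run_status = status_resp.json()

# 2. Retrieve probe run execution data and analysis results
probe_resp = requests.get(
    f"{url}/workspaces/{workspace_id}/targets/{target_id}/test-runs/{test_run_id}"
    f"/probe-runs/{probe_run_id}",
    headers=headers,
)
probe_resp.raise_for_status()
probe_run_results = probe_resp.json()

# 3. Retrieve overall scores and category breakdown for the target
scores_resp = requests.get(
    f"{url}/workspaces/{workspace_id}/targets/{target_id}/scores",
    headers=headers,
)
scores_resp.raise_for_status()
target_scores = scores_resp.json()

print(json.dumps(test_run_status, indent=2))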

Endpoints Used for Probe Settings

Get probe settings for a target
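
A minimal sketch, again with an assumed endpoint path that may differ from the documented route:

# Hypothetical endpoint path for probe settings -- verify against the API reference.
settings_resp = requests.get(
    f"{url}/workspaces/{workspace_id}/targets/{target_id}/probe-settings",
    headers=headers,
)
settings_resp.raise_for_status()
probe_settings = settings_resp.json()
print(json.dumps(probe_settings, indent=2))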

Endpoints Used for Remediation Tasks & Guardrail Policy Suggestions

Remediation tasks & guardrail policy suggestions for specific probe

Example Use Case 1: Get all FAILED Test Cases for a Given Organization

1. Get all workspaces

2. Retrieve all test runs for every Target ID within a workspace

3. For each test run, retrieve all associated Probe Run IDs

4. For each probe run, retrieve all failed test cases and consolidate them into a single dataset

Note: this could take some time, depending on the number of failed test cases.
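
The loop below sketches steps 1 through 4, using the URL and headers defined earlier. The endpoint paths, response shapes, and field names (such as id and status) are assumptions made for illustration; adjust them to match the actual API responses.

# Illustrative endpoint paths and response field names -- adjust to the actual API.
failed_test_cases = []

# 1. Get all workspaces
workspaces = requests.get(f"{url}/workspaces", headers=headers).json()

for workspace in workspaces:
    ws_id = workspace["id"]

    # 2. Retrieve all test runs for every target in the workspace
    targets = requests.get(f"{url}/workspaces/{ws_id}/targets", headers=headers).json()
    for target in targets:
        t_id = target["id"]
        test_runs = requests.get(
            f"{url}/workspaces/{ws_id}/targets/{t_id}/test-runs", headers=headers
        ).json()

        for test_run in test_runs:
            tr_id = test_run["id"]

            # 3. For each test run, retrieve all associated probe runs
            probe_runs = requests.get(
                f"{url}/workspaces/{ws_id}/targets/{t_id}/test-runs/{tr_id}/probe-runs",
                headers=headers,
            ).json()

            for probe_run in probe_runs:
                pr_id = probe_run["id"]

                # 4. Retrieve the test cases for the probe run and keep the failed ones
                test_cases = requests.get(
                    f"{url}/workspaces/{ws_id}/targets/{t_id}/test-runs/{tr_id}"
                    f"/probe-runs/{pr_id}/test-cases",
                    headers=headers,
                ).json()
                failed_test_cases.extend(
                    tc for tc in test_cases if tc.get("status") == "FAILED"
                )

print(f"Collected {len(failed_test_cases)} failed test cases")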

Example Use Case 2: Retrieving FAILED Test Cases for a Specific Benchmark Model and Probe Run

1. Retrieve all benchmark models to view their IDs and details

Benchmarks come in several types. To retrieve specific test conversations, you must first select the benchmark type for which you want to obtain results.
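
A minimal sketch of this step, with an assumed endpoint path:

# Hypothetical endpoint path -- verify against the API reference.
benchmarks_resp = requests.get(f"{url}/benchmarks", headers=headers)
benchmarks_resp.raise_for_status()
benchmark_models = benchmarks_resp.json()

# Inspect the available benchmark models, their IDs, and their types
print(json.dumps(benchmark_models, indent=2))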

2. Retrieve specific benchmark test cases

Example: Retrieve data for OpenAI 4o without a system prompt
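
A minimal sketch, assuming a benchmark test-cases endpoint; benchmark_id is a placeholder for the ID of the OpenAI 4o (no system prompt) entry returned by the previous call:

# Placeholder ID taken from the benchmark list above -- not a real identifier.
benchmark_id = "BENCHMARK_MODEL_ID"

# Hypothetical endpoint path -- verify against the API reference.
benchmark_cases_resp = requests.get(
    f"{url}/benchmarks/{benchmark_id}/test-cases",
    headers=headers,
)
benchmark_cases_resp.raise_for_status()
benchmark_test_cases = benchmark_cases_resp.json()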

3. Retrieve data for a specific probe run

Example: Get the Context Leakage probe run
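
A minimal sketch that filters the benchmark's probe runs by name; the endpoint path and the name and id fields are assumptions:

# Hypothetical endpoint path and field names -- adjust to the actual response shape.
probe_runs = requests.get(
    f"{url}/benchmarks/{benchmark_id}/probe-runs",
    headers=headers,
).json()

# Pick the Context Leakage probe run from the list
context_leakage_run = next(
    pr for pr in probe_runs if pr.get("name") == "Context Leakage"
)
benchmark_probe_run_id = context_leakage_run["id"]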

4. Retrieve all failed test cases for this probe run
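
A minimal sketch using the benchmark_probe_run_id selected in the previous step; the endpoint path and the status field are assumptions:

# Hypothetical endpoint path -- verify against the API reference.
cases = requests.get(
    f"{url}/benchmarks/{benchmark_id}/probe-runs/{benchmark_probe_run_id}/test-cases",
    headers=headers,
).json()

failed_cases = [tc for tc in cases if tc.get("status") == "FAILED"]
print(f"{len(failed_cases)} failed test cases for the Context Leakage probe run")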
