Playground

The Playground allows you to test how your active rules behave against real prompts and responses before or during production use. It simulates runtime protection, showing you in real time how your policy rules flag both inputs and outputs.

Figure 1: AI Runtime Protection Playground

Details

At the top of the Playground view you will see:

  • AI Runtime Policy Active - toggle to enable or disable the current AI Runtime Policy during testing.

  • Input Rules - lists the rules applied to user input (e.g. Jailbreak, Off Topic).

  • Output Rules - lists the rules applied to model responses (e.g. Off Topic).

Provide Your System Prompt

Enter the system prompt for your AI system in this section. This helps you replicate the real environment of your application so you can test the AI Runtime Policy under realistic conditions.

Conversation Testing

On the right-hand side, you can enter a user message and see how both the input and the system’s output are evaluated.

The results panel shows:

  • All rules and topics that were applied to the message.

  • Flagged results - highlighted in red when a message violates a policy.

  • Threshold vs. Similarity - the detection confidence compared to the configured threshold for each rule.

  • Topic-level detection - for Off Topic rules, each configured topic (e.g. Crypto Investment, Political Discussion) is listed with its result and similarity score.
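The threshold-vs-similarity comparison above can be sketched in a few lines. This is an illustrative sketch only: the rule names, scores, and the exact comparison logic are assumptions for demonstration, not the product's actual detection algorithm.

```python
# Hypothetical sketch: flag each rule whose similarity score meets or
# exceeds its configured threshold, mirroring the results panel above.
def evaluate(rules, similarities):
    """rules: {rule_name: threshold}; similarities: {rule_name: score}."""
    results = {}
    for rule, threshold in rules.items():
        similarity = similarities.get(rule, 0.0)
        results[rule] = {
            "similarity": similarity,
            "threshold": threshold,
            "flagged": similarity >= threshold,  # red-highlighted when True
        }
    return results

# Example with one input rule and one hypothetical Off Topic topic
rules = {"Jailbreak": 0.80, "Off Topic: Crypto Investment": 0.70}
scores = {"Jailbreak": 0.35, "Off Topic: Crypto Investment": 0.91}
results = evaluate(rules, scores)
```

Here only the Crypto Investment topic would be flagged, since 0.91 exceeds its 0.70 threshold while 0.35 falls below the Jailbreak threshold of 0.80.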

Note
  • The Playground is for testing and validation; it does not connect to your production LLM or agent.

  • Use it to iterate on thresholds, canary words, and other configurations before deploying AI Runtime Protection into production.

  • Messages in the Playground are only visible in this testing view; they do not appear in the Messages tab for runtime traffic.
