Connect AI Runtime Protection Service
The AI Runtime Protection service is the runtime enforcement layer that evaluates your AI system’s input and output messages in real time. Each request must include the AI Runtime Policy ID, which you receive after creating an AI Runtime Policy in the SPLX Platform. This ID tells the service which policy configuration to enforce, and the service responds with whether the message is safe or flagged.
Basic Integration
The following snippet shows how to call the AI Runtime Protection service. You can add this wherever it best fits your application logic, for example before sending user input to the model or after receiving the model’s output:
```python
import asyncio

import grpc
from grpc import aio

# pb2 and pb2_grpc are the Python modules generated from the
# AI Runtime Protection service's .proto definitions.

async def main() -> None:
    async with aio.secure_channel(
        "<<Guardrail URL>>", credentials=grpc.ssl_channel_credentials()
    ) as channel:
        stub = pb2_grpc.GuardrailServiceStub(channel)
        content = "<<Message Content>>"
        resp: pb2.GuardResponse = await stub.Guard(
            pb2.GuardRequest(
                configuration_name="<<GuardrailID>>",
                message=pb2.Message(role="<<Message Role>>", content=content),
            ),
            metadata=(("x-api-key", "<<PAT>>"),),
        )
        print(resp)

asyncio.run(main())
```
Placeholders
Replace the placeholders with your own values:
Guardrail URL - the endpoint of your AI Runtime Protection service.
Message Content - the actual text to evaluate (user input or AI output).
GuardrailID - the ID of the AI Runtime Policy you created in the SPLX Platform. This ensures the correct policy configuration is enforced.
Message Role - set to "user" for input messages or "assistant" for output messages.
PAT - your personal access token, passed to the service in the x-api-key request metadata.
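As a small illustration of the Message Role convention above, a helper can select the correct role string based on the direction of the message. The function name below is hypothetical, not part of the service API:

```python
def message_role(is_model_output: bool) -> str:
    """Map message direction to the role string the Guard request expects."""
    # User input is evaluated with role "user"; model output with "assistant".
    return "assistant" if is_model_output else "user"
```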
Input and Output Example
Below is a typical chatbot flow, showing how to check both user input and AI output against the AI Runtime Protection service:
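A minimal sketch of that flow is shown below. This is an illustration rather than the platform's official sample: `chat_turn`, `model`, and `guard` are hypothetical names, and `guard` stands in for the `stub.Guard` call from the Basic Integration snippet, returning True when a message is flagged by the policy:

```python
import asyncio
from typing import Awaitable, Callable

# Hypothetical callable types: `model` generates a reply, and `guard` wraps
# the Guard RPC, returning True when the given message is flagged.
ModelFn = Callable[[str], Awaitable[str]]
GuardFn = Callable[[str, str], Awaitable[bool]]  # (role, content) -> flagged?

async def chat_turn(user_input: str, model: ModelFn, guard: GuardFn) -> str:
    # 1. Check the user's input before it reaches the model.
    if await guard("user", user_input):
        return "Your message was blocked by policy."
    # 2. Generate the model's reply.
    answer = await model(user_input)
    # 3. Check the model's output before returning it to the user.
    if await guard("assistant", answer):
        return "The response was blocked by policy."
    return answer
```

To use this sketch, implement `guard` on top of the `stub.Guard` call shown earlier, mapping the returned GuardResponse to a flagged/safe boolean according to your policy.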
The AI Runtime Protection service uses rules that are Built with Llama.