Connect Guardrail Service

The guardrail service is the runtime enforcement layer that evaluates your AI system’s input and output messages in real time. Each request must include the Guardrail ID, which you receive after creating a guardrail in the SPLX Platform. This ID tells the service which policy configuration to enforce, and the service’s response indicates whether the message is safe or flagged.

Basic Integration

The following snippet shows how to call the guardrail service. The call is asynchronous, so it must run inside an async function; add it wherever it best fits your application logic, for example before sending user input to the model or after receiving the model’s output:

import grpc
from grpc import aio

# pb2 / pb2_grpc below refer to the Python modules generated from the
# guardrail service .proto definitions; the exact module names depend on
# how you generate your gRPC stubs.

async with aio.secure_channel(
    "<<Guardrail URL>>", credentials=grpc.ssl_channel_credentials()
) as channel:
    stub = pb2_grpc.GuardrailServiceStub(channel)

    content = "<<Message Content>>"

    resp: pb2.GuardResponse = await stub.Guard(
        pb2.GuardRequest(
            configuration_name="<<GuardrailID>>",
            message=pb2.Message(role="<<Message Role>>", content=content),
        ),
        metadata=(("x-api-key", "<<PAT>>"),),
    )
    print(resp)

Placeholders

Replace the placeholders with your own values:

  • Guardrail URL - the endpoint of your guardrail service.

  • Message Content - the text to evaluate (user input or AI output).

  • GuardrailID - the ID of the guardrail you created in the SPLX Platform. This ensures the correct policy configuration is enforced.

  • Message Role - set to "user" for input messages or "assistant" for output messages.

  • PAT - your personal access token (API key), passed in the x-api-key metadata header to authenticate the request.
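
For illustration, these values are often read from configuration rather than hard-coded. The snippet below is a minimal sketch assuming hypothetical environment variable names (GUARDRAIL_URL, GUARDRAIL_ID, SPLX_PAT); adapt them to your own setup:

import os

# Hypothetical variable names -- use whatever configuration mechanism
# your application already relies on.
GUARDRAIL_URL = os.environ["GUARDRAIL_URL"]  # endpoint of the guardrail service
GUARDRAIL_ID = os.environ["GUARDRAIL_ID"]    # ID of the guardrail created in the SPLX Platform
SPLX_PAT = os.environ["SPLX_PAT"]            # personal access token for the x-api-key header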

Input and Output Example

Below is a typical chatbot flow, showing how to check both user input and AI output against the guardrail service:

# --- Input message check ---
# get_user_message, get_llm_response, and block_message below stand in
# for your application's own chat logic.
user_message = get_user_message()

async with aio.secure_channel(
    "<<Guardrail URL>>", credentials=grpc.ssl_channel_credentials()
) as channel:
    stub = pb2_grpc.GuardrailServiceStub(channel)
  
    resp: pb2.GuardResponse = await stub.Guard(
        pb2.GuardRequest(
            configuration_name="<<GuardrailID>>",
            message=pb2.Message(role="user", content=user_message),
        ),
        metadata=(("x-api-key", "<<PAT>>"),),
    )

    if resp.failed:  # If guardrail flagged the message on input
        block_message(user_message)

# --- Output message check ---
llm_response = get_llm_response(user_message)

async with aio.secure_channel(
    "<<Guardrail URL>>", credentials=grpc.ssl_channel_credentials()
) as channel:
    stub = pb2_grpc.GuardrailServiceStub(channel)
  
    resp: pb2.GuardResponse = await stub.Guard(
        pb2.GuardRequest(
            configuration_name="<<GuardrailID>>",
            message=pb2.Message(role="assistant", content=llm_response),
        ),
        metadata=(("x-api-key", "<<PAT>>"),),
    )

    if resp.failed:  # If guardrail flagged the message on output
        block_message(llm_response)
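
The flow above opens a separate channel for each check. In a long-running application you would typically open the channel and stub once and reuse them for both checks. The helper below is a minimal sketch of that pattern; check_with_guardrail, handle_turn, and the refusal text are illustrative names, and GUARDRAIL_URL, GUARDRAIL_ID, and SPLX_PAT refer to the hypothetical configuration values shown earlier:

import grpc
from grpc import aio

# Reuses pb2 / pb2_grpc (the generated gRPC modules) and the hypothetical
# GUARDRAIL_URL / GUARDRAIL_ID / SPLX_PAT values from the sketches above.

async def check_with_guardrail(stub, role, content) -> bool:
    """Return True if the guardrail flagged the message."""
    resp: pb2.GuardResponse = await stub.Guard(
        pb2.GuardRequest(
            configuration_name=GUARDRAIL_ID,
            message=pb2.Message(role=role, content=content),
        ),
        metadata=(("x-api-key", SPLX_PAT),),
    )
    return resp.failed


async def handle_turn(user_message: str) -> str:
    async with aio.secure_channel(
        GUARDRAIL_URL, credentials=grpc.ssl_channel_credentials()
    ) as channel:
        stub = pb2_grpc.GuardrailServiceStub(channel)

        # Check the user input before it reaches the model.
        if await check_with_guardrail(stub, "user", user_message):
            return "Sorry, I can't help with that."  # placeholder refusal

        # Check the model output before it reaches the user.
        llm_response = get_llm_response(user_message)
        if await check_with_guardrail(stub, "assistant", llm_response):
            return "Sorry, I can't help with that."  # placeholder refusal

        return llm_response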

The Guardrail Service uses guards Built with Llama.
