POST /defend/{workflow_id}/events
Submit a workflow event
from deeprails import Deeprails

DEEPRAILS_API_KEY = "YOUR_API_KEY"

client = Deeprails(
    api_key=DEEPRAILS_API_KEY,
)

event_response = client.defend.submit_event(
    workflow_id="defend_abc123",
    model_input={
        "user_prompt": "Hello, how are you?",
    },
    model_output="I am good, thank you!",
    model_used="gpt-4o-mini",
    run_mode="smart",
    nametag="test",
)
print(event_response.event_id)
{
  "event_id": "<string>",
  "workflow_id": "<string>",
  "filtered": true,
  "evaluation_id": "<string>",
  "attempt_number": 123
}
The request body must include a model_input dictionary (containing at least a system_prompt or user_prompt field), a model_output string to be evaluated, the model_used to generate the output (e.g., gpt-5-mini), the run_mode that balances speed, accuracy, and cost for the evaluation, and, optionally, a nametag for the workflow event.

The run mode determines which models power the evaluation:
- precision_plus - Maximum accuracy using the most advanced models
- precision - High accuracy with optimized performance
- smart - Balanced speed and accuracy (default)
- economy - Fastest evaluation at lowest cost

The event will be run with the guardrail metrics and improvement steps configured in its associated workflow.

When you create a workflow event, you’ll receive an event ID. Use this ID to track the event’s progress and retrieve all evaluations and improvement results.
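For example, you might persist the event ID returned by submit_event and use it later to look up the event. The retrieval call below (client.defend.get_event) is only an assumed method name for illustration, not a confirmed part of the SDK; consult the SDK reference for the actual call.

from deeprails import Deeprails

client = Deeprails(api_key="YOUR_API_KEY")

event_response = client.defend.submit_event(
    workflow_id="defend_abc123",
    model_input={"user_prompt": "Hello, how are you?"},
    model_output="I am good, thank you!",
    model_used="gpt-4o-mini",
    run_mode="smart",
)

# Store the event ID so evaluations and improvement results
# can be retrieved later.
event_id = event_response.event_id

# Hypothetical retrieval call; the actual SDK method name may differ.
# event = client.defend.get_event(workflow_id="defend_abc123", event_id=event_id)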

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
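As a sketch of the raw HTTP form, the request below sends the same payload with the requests library and the Bearer header described above. The base URL used here (https://api.deeprails.com) is an assumption; substitute the actual API host.

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.deeprails.com"  # assumed host; replace with the actual API base URL

response = requests.post(
    f"{BASE_URL}/defend/defend_abc123/events",
    headers={
        # Bearer authentication header of the form "Bearer <token>"
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model_input": {"user_prompt": "Hello, how are you?"},
        "model_output": "I am good, thank you!",
        "model_used": "gpt-4o-mini",
        "run_mode": "smart",
        "nametag": "test",
    },
)
print(response.json()["event_id"])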

Path Parameters

workflow_id
string
required

Workflow ID associated with this event.

Body

application/json
model_input
object
required

A dictionary of inputs sent to the LLM to generate the output. The dictionary must contain at least a user_prompt or system_prompt field. For the ground_truth_adherence guardrail metric, a ground_truth field should also be provided.

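For instance, a model_input that supports the ground_truth_adherence metric might look like the sketch below (all field values are illustrative):

model_input = {
    "system_prompt": "You are a concise assistant.",
    "user_prompt": "What year did Apollo 11 land on the Moon?",
    # Needed only when the workflow uses the ground_truth_adherence metric.
    "ground_truth": "Apollo 11 landed on the Moon in 1969.",
}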
model_output
string
required

Output generated by the LLM to be evaluated.

model_used
string
required

Model ID used to generate the output, like gpt-4o or o3.

run_mode
enum<string>
required

Run mode for the workflow event. The run mode allows the user to optimize for speed, accuracy, and cost by determining which models are used to evaluate the event. Available run modes include precision_plus, precision, smart, and economy. Defaults to smart.

Available options:
precision_plus,
precision,
smart,
economy
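For example, to favor maximum accuracy over speed you could pass precision_plus instead of the default smart. This sketch reuses the client and arguments from the example at the top of the page:

event_response = client.defend.submit_event(
    workflow_id="defend_abc123",
    model_input={"user_prompt": "Hello, how are you?"},
    model_output="I am good, thank you!",
    model_used="gpt-4o-mini",
    # Evaluate with the most advanced models; the default run mode is "smart".
    run_mode="precision_plus",
)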
nametag
string

An optional, user-defined tag for the event.

Response

Workflow event created successfully

event_id
string
required

A unique workflow event ID.

workflow_id
string
required

Workflow ID associated with the event.

filtered
boolean

False if the evaluation passed all of the guardrail metrics; True if it failed any of them.

evaluation_id
string

A unique evaluation ID associated with this event. Every event has one or more evaluation attempts.

attempt_number
integer

Count of improvement attempts for the event. A value greater than one indicates that all previous improvement attempts failed.
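A minimal sketch of handling this response with the Python SDK, continuing from the submission example at the top of the page and assuming the returned object exposes these fields as attributes (as event_id is accessed there):

# Inspect the event submission response.
print(f"Event {event_response.event_id} (workflow {event_response.workflow_id})")

if event_response.filtered:
    # The evaluation failed at least one guardrail metric.
    print(
        f"Output was filtered on attempt {event_response.attempt_number}; "
        f"see evaluation {event_response.evaluation_id} for details."
    )
else:
    print("Output passed all configured guardrail metrics.")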
