Submit a Workflow Event

POST /defend/{workflow_id}/events
from deeprails import DeepRails

DEEPRAILS_API_KEY = "YOUR_API_KEY"

client = DeepRails(
    api_key=DEEPRAILS_API_KEY,
)

event_response = client.defend.submit_event(
    workflow_id="wkfl_xxxxxxxxxxxx",
    model_input={
      "system_prompt": "You are a helpful tutor specializing in AP science classes.",
      "user_prompt": "Explain the difference between mitosis and meiosis in one sentence.",
      "context": [{"role": "user", "content": "I have an AP Bio exam tomorrow, can you help me study?"}, {"role": "tutor", "content": "Sure, I'll help you study."}]
    },
    model_output="Mitosis produces two genetically identical diploid cells for growth and tissue repair, whereas meiosis generates four genetically varied haploid gametes for sexual reproduction.",
    model_used="gpt-4o-mini",
    run_mode="precision",
    nametag="test",
)
print(event_response)
{
  "event_id": "evt_xxxxxxxxxxxx",
  "workflow_id": "wkfl_xxxxxxxxxxxx",
  "created_at": "2025-11-10T01:32:44.591Z",
  "status": "In Progress",
  "billing_request_id": "bill_0123456789abcdefabcd"
}
Workflow events represent individual LLM calls in your production use case. Whenever you receive a new LLM response, you can submit it, along with its input, to the workflow for evaluation and remediation. The model_input field in the request must be a dictionary containing at least a system_prompt field or a user_prompt field. The request must also include the model_used to generate the output (e.g. gpt-5-mini), the selected run_mode that trades off speed, accuracy, and cost for the evaluation, and optionally a nametag for the workflow event.

The run mode determines which models power the evaluation:
- precision_max_codex - Ultimate accuracy with Codex-optimized deep analysis
- precision_max - Maximum accuracy and detail
- precision_codex - High accuracy with code-optimized analysis
- precision - High accuracy analysis (default)
- fast - Maximum speed for high-volume processing
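
The run modes above form a fixed set, with precision as the documented default. As an illustrative client-side check (the helper itself is ours, not part of the DeepRails SDK), you could validate a run_mode before submitting:

```python
# Run modes listed on this page.
RUN_MODES = {"precision_max_codex", "precision_max", "precision_codex", "precision", "fast"}

def resolve_run_mode(mode=None):
    """Validate a run_mode string, falling back to the documented default."""
    if mode is None:
        return "precision"  # documented default
    if mode not in RUN_MODES:
        raise ValueError(f"unknown run_mode: {mode!r}")
    return mode
```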

The event’s evaluations will be run with the guardrail metrics and improvement action configured in its associated workflow.

When you create a workflow event, you’ll receive an event ID. Use this ID to track the event’s progress and retrieve all evaluations and improvement results.
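
Since the endpoint is a plain JSON POST with Bearer authentication, the SDK call above can also be sketched as a raw HTTP request. This is a sketch only: the base URL below is an assumption, not confirmed by this page, and the final send is left as a comment.

```python
import json

# Assumed API host; substitute the actual DeepRails base URL.
BASE_URL = "https://api.deeprails.com"
API_KEY = "YOUR_API_KEY"
WORKFLOW_ID = "wkfl_xxxxxxxxxxxx"

# Bearer authentication header, as described under Authorizations below.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# JSON body mirroring the SDK example above.
payload = {
    "model_input": {
        "system_prompt": "You are a helpful tutor specializing in AP science classes.",
        "user_prompt": "Explain the difference between mitosis and meiosis in one sentence.",
    },
    "model_output": "Mitosis produces two genetically identical diploid cells ...",
    "model_used": "gpt-4o-mini",
    "run_mode": "precision",
    "nametag": "test",
}

url = f"{BASE_URL}/defend/{WORKFLOW_ID}/events"
body = json.dumps(payload)
# To send, e.g.: requests.post(url, headers=headers, data=body)
```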

Authorizations

Authorization
string · header · required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

workflow_id
string · required

Workflow ID associated with this event.

Body

application/json
model_input
object · required

A dictionary of inputs sent to the LLM to generate output. The dictionary must contain a user_prompt field. For the ground_truth_adherence guardrail metric, ground_truth should be provided.
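
A minimal pre-flight check for model_input, based on the requirements above (the helper name is ours for illustration, not part of the SDK):

```python
def validate_model_input(model_input, needs_ground_truth=False):
    """Check a model_input dict against the documented requirements."""
    if "user_prompt" not in model_input:
        raise ValueError("model_input must contain a user_prompt field")
    if needs_ground_truth and "ground_truth" not in model_input:
        # ground_truth is needed for the ground_truth_adherence guardrail metric.
        raise ValueError("ground_truth is required for ground_truth_adherence")
```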

model_output
string · required

Output generated by the LLM to be evaluated.

model_used
string · required

Model ID used to generate the output, like gpt-4o or o3.

run_mode
enum<string> · required

Run mode for the workflow event. The run mode allows the user to optimize for speed, accuracy, and cost by determining which models are used to evaluate the event. Available run modes include precision_max_codex, precision_max, precision_codex, precision, and fast. Defaults to precision.

Available options: precision_max_codex, precision_max, precision_codex, precision, fast

nametag
string

An optional, user-defined tag for the event.

Response

Workflow event created successfully

event_id
string · required

A unique workflow event ID.

Example: "evt_xxxxxxxxxxxx"

workflow_id
string · required

Workflow ID associated with the event.

Example: "wkfl_xxxxxxxxxxxx"

created_at
string<date-time> · required

The time the event was created in UTC.

Example: "2025-11-10T01:32:44.591Z"
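
Since created_at is an ISO 8601 UTC timestamp, it can be parsed in Python like this (the Z suffix is rewritten for compatibility with datetime.fromisoformat on Python versions before 3.11):

```python
from datetime import datetime

created_at = "2025-11-10T01:32:44.591Z"
# Replace the Z suffix so older fromisoformat implementations accept it.
ts = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
```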

status
enum<string> · required

Status of the event.

Available options: In Progress, Completed

Example: "In Progress"

billing_request_id
string · required

The ID of the billing request for the event.

Example: "bill_0123456789abcdefabcd"