POST /monitor/{monitor_id}/events

Submit a Monitor Event
from deeprails import DeepRails

DEEPRAILS_API_KEY = "YOUR_API_KEY"

client = DeepRails(
    api_key=DEEPRAILS_API_KEY,
)

monitor_response = client.monitor.submit_event(
    monitor_id="mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    model_input={
        "user_prompt": "What is the capital of France?"
    },
    model_output="Paris",
    run_mode="economy"
)
print(monitor_response)
{
  "event_id": "evt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "monitor_id": "mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "created_at": "2025-01-15T10:30:00Z"
}
Monitor events represent individual LLM usages in your production use case. Whenever you receive a new LLM response, submit an event to this endpoint to have it evaluated.

The request body must include a model_input dictionary (containing at least a system_prompt field or a user_prompt field), a model_output string to be evaluated, and a guardrail_metrics array specifying which metrics to evaluate against. Optionally, include the model_used, the selected run_mode, and a human-readable nametag. Including the web_search, file_search, and context_awareness fields allows the evaluation model to use those extended AI capabilities.
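Under these rules, a complete request body might look like the following sketch. The guardrail metric names and the optional field values are illustrative assumptions, not values taken from this reference:

```python
# Illustrative request body for POST /monitor/{monitor_id}/events.
# The metric names in guardrail_metrics are assumed for illustration.
payload = {
    "model_input": {
        "system_prompt": "You are a helpful assistant.",  # at least one of
        "user_prompt": "What is the capital of France?",  # these two is required
    },
    "model_output": "Paris",                      # required: text to evaluate
    "guardrail_metrics": ["correctness"],         # required: metrics to run (assumed name)
    "run_mode": "economy",                        # optional; defaults to "smart"
    "nametag": "capital-lookup-01",               # optional human-readable tag
}
```

If you evaluate against the ground_truth_adherence metric, add a ground_truth field to the model_input dictionary as noted below.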

Run modes determine the models that power evaluations:
- precision_plus - Maximum accuracy using the most advanced models
- precision - High accuracy with optimized performance
- smart - Balanced speed and accuracy (default)
- economy - Fastest evaluation at lowest cost

When you create a monitor event, you’ll receive an event ID. Use this ID to track the event’s progress and retrieve the evaluation results.
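For example, you can parse the creation response (its shape matches the response example above) and keep the event_id for later retrieval:

```python
# Parse the creation response and keep the event ID for later lookups.
# The response dict below mirrors the example response in this reference.
response = {
    "event_id": "evt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "monitor_id": "mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "created_at": "2025-01-15T10:30:00Z",
}

event_id = response["event_id"]
print(event_id)  # use this ID when querying the event's evaluation status
```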

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
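If you call the endpoint over raw HTTP rather than the SDK, the header can be built as in this sketch; the base URL shown is an assumption, not taken from this reference:

```python
# Build the Authorization header for a raw HTTP call to the endpoint.
# DEEPRAILS_API_KEY is a placeholder; the base URL is an assumed example.
DEEPRAILS_API_KEY = "YOUR_API_KEY"
monitor_id = "mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

url = f"https://api.deeprails.com/monitor/{monitor_id}/events"  # assumed base URL
headers = {
    "Authorization": f"Bearer {DEEPRAILS_API_KEY}",  # Bearer <token> form
    "Content-Type": "application/json",
}
```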

Path Parameters

monitor_id
string
required

The ID of the monitor associated with this event.

Body

application/json
model_input
object
required

A dictionary of inputs sent to the LLM to generate output. The dictionary must contain at least a user_prompt field or a system_prompt field. For the ground_truth_adherence guardrail metric, a ground_truth field should also be provided.

model_output
string
required

Output generated by the LLM to be evaluated.

run_mode
enum<string>

Run mode for the monitor event. The run mode allows the user to optimize for speed, accuracy, and cost by determining which models are used to evaluate the event. Available run modes include precision_plus, precision, smart, and economy. Defaults to smart.

Available options:
precision_plus,
precision,
smart,
economy
nametag
string

An optional, user-defined tag for the event.

Response

Monitor event created successfully

event_id
string
required

A unique monitor event ID.

Example:

"evt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

monitor_id
string
required

Monitor ID associated with this event.

Example:

"mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

created_at
string<date-time>

The time the monitor event was created in UTC.

Example:

"2025-01-15T10:30:00Z"