POST /monitor/{monitor_id}/events
Submit a monitor event
Example request (Python SDK):

from deeprails import Deeprails

DEEPRAILS_API_KEY = "YOUR_API_KEY"

client = Deeprails(
    api_key=DEEPRAILS_API_KEY,
)

monitor_response = client.monitor.submit_event(
    monitor_id="mon_8771f5a86648",
    model_input={
        "user_prompt": "What is the capital of France?"
    },
    model_output="Paris",
    model_used="gpt-4o-mini",
    guardrail_metrics=["correctness", "completeness", "comprehensive_safety"],
    run_mode="economy"
)
print(monitor_response.data.event_id)
print(monitor_response.data.evaluation_id)
Example response:

{
  "success": true,
  "data": {
    "event_id": "<string>",
    "monitor_id": "<string>",
    "evaluation_id": "<string>",
    "created_at": "2023-11-07T05:31:56Z"
  },
  "message": "<string>"
}
The request body must include a model_input dictionary (containing at least a system_prompt or user_prompt field), a model_output string to be evaluated, and a guardrail_metrics array specifying which metrics to evaluate against. Optionally, include the model_used, a run_mode, and a human-readable nametag.
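For example, reusing the client from the snippet above, a minimal submission passes only the required fields and lets run_mode fall back to its default of smart:

minimal_response = client.monitor.submit_event(
    monitor_id="mon_8771f5a86648",
    model_input={"user_prompt": "What is the capital of France?"},
    model_output="Paris",
    guardrail_metrics=["correctness"],
)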

Run modes determine which models power evaluations:
- precision_plus - Maximum accuracy using the most advanced models
- precision - High accuracy with optimized performance
- smart - Balanced speed and accuracy (default)
- economy - Fastest evaluation at lowest cost

Available guardrail metrics include correctness, completeness, instruction_adherence, context_adherence, ground_truth_adherence, and comprehensive_safety.

When you create a monitor event, you’ll receive an event ID. Use this ID to track the event’s progress and retrieve the evaluation results.
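A sketch of that flow is below; retrieve_event is a hypothetical method name used for illustration, so check the SDK reference for the exact call:

# Hypothetical retrieval method -- the exact name and signature may differ.
event = client.monitor.retrieve_event(
    monitor_id="mon_8771f5a86648",
    event_id=monitor_response.data.event_id,
)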

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
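For example, when calling the endpoint directly over HTTP rather than through the SDK, the header is set like this (the base URL https://api.deeprails.com is an assumption; substitute the base URL from your account):

import requests

DEEPRAILS_API_KEY = "YOUR_API_KEY"

response = requests.post(
    # Assumed base URL; replace with your actual API host.
    "https://api.deeprails.com/monitor/mon_8771f5a86648/events",
    headers={
        "Authorization": f"Bearer {DEEPRAILS_API_KEY}",  # Bearer <token>
        "Content-Type": "application/json",
    },
    json={
        "model_input": {"user_prompt": "What is the capital of France?"},
        "model_output": "Paris",
        "guardrail_metrics": ["correctness"],
    },
)
print(response.json())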

Path Parameters

monitor_id
string
required

The ID of the monitor associated with this event.

Body

application/json
model_input
object
required

A dictionary of inputs sent to the LLM to generate output. The dictionary must contain at least a user_prompt or system_prompt field. When evaluating the ground_truth_adherence guardrail metric, a ground_truth field should also be provided.

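For instance, an input dictionary that also carries the ground truth needed for ground_truth_adherence:

model_input = {
    "system_prompt": "You are a concise geography assistant.",
    "user_prompt": "What is the capital of France?",
    # Required only when guardrail_metrics includes ground_truth_adherence.
    "ground_truth": "Paris",
}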
model_output
string
required

Output generated by the LLM to be evaluated.

guardrail_metrics
enum<string>[]
required

An array of guardrail metrics that the model input and output pair will be evaluated on. For non-enterprise users, these will be limited to correctness, completeness, instruction_adherence, context_adherence, ground_truth_adherence, and/or comprehensive_safety.

model_used
string

Model ID used to generate the output, like gpt-4o or o3.

run_mode
enum<string>

Run mode for the monitor event. The run mode allows the user to optimize for speed, accuracy, and cost by determining which models are used to evaluate the event. Available run modes include precision_plus, precision, smart, and economy. Defaults to smart.

Available options: precision_plus, precision, smart, economy

nametag
string

An optional, user-defined tag for the event.

Response

Monitor event created successfully

Response wrapper for operations returning a MonitorEventResponse.

success
boolean
required

Represents whether the request was completed successfully.

data
object

The created monitor event, including its event_id, monitor_id, evaluation_id, and created_at timestamp.

message
string

The accompanying message for the request. Includes error details when applicable.
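A minimal sketch of handling this wrapper, assuming the SDK exposes the fields as attributes in the same way the example above reads data.event_id:

if monitor_response.success:
    print(monitor_response.data.event_id)
else:
    # message carries error details when a request fails
    print(f"Monitor event submission failed: {monitor_response.message}")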
