Use this endpoint to submit a model input and output pair to a monitor for evaluation.

The request body requires a model_input dictionary (containing at least a system_prompt field or a user_prompt field), a model_output string to be evaluated, and a guardrail_metrics array specifying which metrics to evaluate against. Optionally, include the model_used, the selected run_mode, and a human-readable nametag. Including the web_search, file_search, and context_awareness fields allows the evaluation model to use those extended AI capabilities.

Available run modes:
- precision_plus - Maximum accuracy using the most advanced models
- precision - High accuracy with optimized performance
- smart - Balanced speed and accuracy (default)
- economy - Fastest evaluation at lowest cost

Authentication uses a Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
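A minimal request sketch in Python. The endpoint URL, auth token, and metric names below are placeholders for illustration, not values defined by this reference; substitute the URL, token, and guardrail metrics configured for your monitor.

```python
import requests

# Placeholder endpoint URL and auth token -- substitute your actual values.
API_URL = "https://api.example.com/v1/monitors/mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/events"
AUTH_TOKEN = "your-auth-token"

payload = {
    "model_input": {
        "system_prompt": "You are a helpful assistant.",
        "user_prompt": "Summarize the attached report in three sentences.",
    },
    "model_output": "The report covers Q3 revenue, cost trends, and hiring plans...",
    # Illustrative metric names; use the guardrail metrics enabled on your monitor.
    "guardrail_metrics": ["toxicity", "instruction_adherence"],
    "run_mode": "smart",                 # precision_plus | precision | smart | economy
    "nametag": "nightly-regression-run", # optional human-readable tag
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # contains the new monitor event ID
```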
The ID of the monitor associated with this event.
A dictionary of inputs sent to the LLM to generate output. The dictionary must contain at least a user_prompt field or a system_prompt field. For the ground_truth_adherence guardrail metric, a ground_truth field should also be provided.
Output generated by the LLM to be evaluated.
Run mode for the monitor event. The run mode allows the user to optimize for speed, accuracy, and cost by determining which models are used to evaluate the event. Available run modes include precision_plus, precision, smart, and economy. Defaults to smart.
Available options: precision_plus, precision, smart, economy
An optional, user-defined tag for the event.
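A sketch of a request body for a ground_truth_adherence evaluation, assuming illustrative prompt text and a reference answer; only the field names come from this reference.

```python
# Sketch: model_input for the ground_truth_adherence metric.
# The prompt text and reference answer below are illustrative only.
model_input = {
    "user_prompt": "What year was the company founded?",
    "ground_truth": "The company was founded in 2012.",  # reference answer to evaluate adherence against
}

payload = {
    "model_input": model_input,
    "model_output": "The company was founded in 2012 in Austin, Texas.",
    "guardrail_metrics": ["ground_truth_adherence"],
    "run_mode": "precision",  # trade speed and cost for higher accuracy
}
```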
Monitor event created successfully
A unique monitor event ID.
"evt_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Monitor ID associated with this event.
"mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
The time the monitor event was created in UTC.
"2025-01-15T10:30:00Z"