Use this endpoint to create a new monitor that evaluates model input and output pairs using guardrails. A monitor requires a name and a set of guardrail_metrics used for evaluation. Optionally, a description and extended capabilities such as web_search, file_search, and context_awareness can be included.

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Name of the new monitor.
An array of guardrail metrics against which the model input and output pair will be evaluated. For non-enterprise users, these are limited to correctness, completeness, instruction_adherence, context_adherence, ground_truth_adherence, and/or comprehensive_safety.
Available options: correctness, completeness, instruction_adherence, context_adherence, ground_truth_adherence, comprehensive_safety

Description of the new monitor.
Whether to enable web search for this monitor's evaluations. Defaults to false.
An array of file IDs to search in the monitor's evaluations. Files must be uploaded via the DeepRails API first.
A file ID corresponding to a file to search in the monitor's evaluations.
Whether to enable context awareness for this monitor's evaluations. Defaults to false. Context includes any structured information that directly relates to the model's input and expected output—e.g., the recent turn-by-turn history between an AI tutor and a student, facts or state passed through an agentic workflow, or other domain-specific signals your system already knows and wants the model to condition on.
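The request fields above can be assembled into a JSON body and sent with the Bearer header. The sketch below is a minimal example using only the Python standard library; the endpoint URL is an assumption — substitute the actual DeepRails API path from your reference.

```python
import json
import urllib.request

# Assumed endpoint path; replace with the actual create-monitor URL.
API_URL = "https://api.deeprails.com/monitor"

def build_monitor_payload(name, guardrail_metrics, description=None,
                          web_search=False, file_search=None,
                          context_awareness=False):
    """Assemble the request body, omitting optional fields left at their defaults."""
    payload = {"name": name, "guardrail_metrics": guardrail_metrics}
    if description is not None:
        payload["description"] = description
    if web_search:
        payload["web_search"] = True
    if file_search:
        # List of file IDs previously uploaded via the DeepRails API.
        payload["file_search"] = file_search
    if context_awareness:
        payload["context_awareness"] = True
    return payload

def create_monitor(token, **kwargs):
    """POST the payload with a Bearer auth header and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_monitor_payload(**kwargs)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `create_monitor(token, name="Tutor QA monitor", guardrail_metrics=["correctness", "completeness"], context_awareness=True)` would create a monitor with context awareness enabled and the remaining capabilities at their defaults.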
Monitor created successfully
A unique monitor ID.
"mon_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
Status of the monitor. Can be active or inactive. Inactive monitors no longer record and evaluate events.
Available options: active, inactive

"active"
The time the monitor was created in UTC.
"2025-01-15T10:30:00Z"