
Create an API Key

  1. In your organization’s DeepRails API Console, go to API Keys.
  2. Click Create key, name it, then copy the key.
  3. (Optional) Save it as the DEEPRAILS_API_KEY environment variable.
[Screenshot: Create and manage API keys in the API Console.]
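
If you saved the key as DEEPRAILS_API_KEY, you can read it at startup instead of hard-coding it. A minimal sketch, assuming the token parameter used in the examples below:

import os

from deeprails import DeepRails

# Read the key from the DEEPRAILS_API_KEY environment variable
# instead of embedding it in source code.
client = DeepRails(token=os.environ["DEEPRAILS_API_KEY"])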

Install the SDK

The SDK is available for Python, TypeScript/Node, Ruby, and Go. The examples in this guide use Python:
pip install deeprails

Create a Monitor

Before you can send events, you need to create a monitor. A monitor is a container for tracking production events and their evaluations.
Tip: You can also create a monitor via the DeepRails API Console.
from deeprails import DeepRails

# Initialize (env var DEEPRAILS_API_KEY is recommended)
client = DeepRails(token="YOUR_API_KEY")

try:
    # Create a monitor
    monitor = client.create_monitor(
        name="Production Chat Assistant Monitor",
        description="Monitoring our production chatbot responses"
    )
    
    print(f"Monitor created with ID: {monitor.monitor_id}")
except Exception as e:
    print(f"Error: {e}")

Send Your First Monitor Event

Use the SDK to log a production event (input + output). The SDK automatically triggers an evaluation using the guardrail metrics you pass and links the result to the event.
from deeprails import DeepRails

# Initialize (env var DEEPRAILS_API_KEY is recommended)
client = DeepRails(token="YOUR_API_KEY")

# Create a monitor event (get the monitor_id from Console → Monitors)
created = client.create_monitor_event(
    monitor_id="mon-xxxxxxxx",
    model_input={"user_prompt": "Summarize the Paris Agreement in 3 bullets."},
    model_output="• International treaty on climate change...",
    model_used="gpt-4o-mini",
    guardrail_metrics=[
        "correctness",
        "completeness",
        "instruction_adherence",
        "comprehensive_safety",
    ],
)

print("Event ID:", created.event_id)
print("Linked evaluation:", created.evaluation_id)

# Retrieve and read evaluation results (auto-run from the monitor event)
fetched = client.get_evaluation(created.evaluation_id)
if fetched.evaluation_result:
    for metric, info in fetched.evaluation_result.items():
        print(metric, info.get("score", "N/A"))
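
Scoring may not finish instantly. A minimal polling sketch, reusing client and created from above and assuming evaluation_result stays empty until the evaluation completes:

import time

# Poll until the linked evaluation has results (assumes evaluation_result
# is empty until scoring completes, as the check above suggests).
for _ in range(10):
    fetched = client.get_evaluation(created.evaluation_id)
    if fetched.evaluation_result:
        for metric, info in fetched.evaluation_result.items():
            print(metric, info.get("score", "N/A"))
        break
    time.sleep(2)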

Required Parameters

Field          Type     Description
monitor_id     string   The ID of the monitor to receive the event (find it in Console → Monitor → Manage Monitors).
model_input    object   Must include at least system_prompt or user_prompt.
model_output   string   The LLM output to be evaluated and recorded with the event.
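
For reference, a model_input sketch that satisfies the requirement above (both keys shown, though either one alone suffices):

model_input = {
    "system_prompt": "You are a concise assistant.",  # either key alone is enough
    "user_prompt": "Summarize the Paris Agreement in 3 bullets.",
}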

Optional Parameters

Field              Type      Description
model_used         string    The model that produced model_output (e.g., gpt-4o-mini).
guardrail_metrics  string[]  Metrics to score (e.g., correctness, completeness, instruction_adherence, context_adherence, ground_truth_adherence, comprehensive_safety).
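
Both optional fields can be omitted entirely. A minimal sketch of the call using only the required parameters (placeholder values):

# Minimal event: model_used and guardrail_metrics omitted.
created = client.create_monitor_event(
    monitor_id="mon_xxxxxxxx",
    model_input={"user_prompt": "What is the capital of France?"},
    model_output="Paris is the capital of France.",
)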

Retrieve Monitor Data

You can retrieve a monitor's details, including statistics on evaluation progress.
from deeprails import DeepRails

client = DeepRails(token="YOUR_API_KEY")

try:
    # Get monitor details
    monitor = client.get_monitor("mon_xxxxxxxx")
    print(f"Monitor name: {monitor.name}")
    print(f"Status: {monitor.monitor_status}")
except Exception as e:
    print(f"Error: {e}")

Check Monitor Analytics via the API Console

  1. Open DeepRails API Console → Monitor → Data.
  2. Filter by model, time range, or search by monitor_id to find events.
  3. Open any event to see the linked evaluation scores and rationales.
[Screenshot: Browse real-time monitor events, filters, and linked evaluation details.]

Next Steps
