# DeepRails

## Docs

- [Create a Workflow](https://docs.deeprails.com/api-reference/defend/create-a-workflow.md): Use this endpoint to create a new guardrail workflow by specifying guardrail thresholds, an improvement action, and optional extended capabilities.
- [Retrieve a Workflow's Details](https://docs.deeprails.com/api-reference/defend/retrieve-a-workflows-details.md): Use this endpoint to retrieve the details of a specific Defend workflow.
- [Retrieve an Event's Details](https://docs.deeprails.com/api-reference/defend/retrieve-an-events-details.md): Use this endpoint to retrieve a specific event of a guardrail workflow.
- [Stream a Workflow Event (Optimized)](https://docs.deeprails.com/api-reference/defend/stream-a-workflow-event-optimized.md): Use this endpoint to submit a model input and output pair to a workflow for evaluation with streaming responses.
- [Submit a Workflow Event](https://docs.deeprails.com/api-reference/defend/submit-a-workflow-event.md): Use this endpoint to submit a model input and output pair to a workflow for evaluation.
- [Update a Workflow](https://docs.deeprails.com/api-reference/defend/update-a-workflow.md): Use this endpoint to update an existing Defend workflow if its details change.
- [Upload a File](https://docs.deeprails.com/api-reference/files/upload-a-file.md): Use this endpoint to upload a file to the DeepRails API.
- [Create a Monitor](https://docs.deeprails.com/api-reference/monitor/create-a-monitor.md): Use this endpoint to create a new monitor to evaluate model inputs and outputs using guardrails.
- [Retrieve a Monitor Event's Details](https://docs.deeprails.com/api-reference/monitor/retrieve-a-monitor-events-details.md): Use this endpoint to retrieve the details of a specific monitor event.
- [Retrieve a Monitor's Details](https://docs.deeprails.com/api-reference/monitor/retrieve-a-monitors-details.md): Use this endpoint to retrieve the details and evaluations associated with a specific monitor.
- [Submit a Monitor Event](https://docs.deeprails.com/api-reference/monitor/submit-a-monitor-event.md): Use this endpoint to submit a model input and output pair to a monitor for evaluation.
- [Update a Monitor](https://docs.deeprails.com/api-reference/monitor/update-a-monitor.md): Use this endpoint to update the name, status, and/or other details of an existing monitor.
- [Defend Details](https://docs.deeprails.com/defend/details.md): A deep dive into how Defend's improvement tools, adaptive thresholds, and retry logic work under the hood, so you can configure workflows with confidence.
- [Defend Overview](https://docs.deeprails.com/defend/overview.md): Defend is the most powerful tool within DeepRails' API suite. It is the real-time correction layer that ensures every model output is hallucination-free before it ever reaches your users. By combining adaptive thresholds with automated improvement tools, Defend doesn't just detect low-quality or unsafe outputs; it corrects them.
- [Quickstart Guide](https://docs.deeprails.com/defend/quickstart.md): Get started with the Defend API in minutes.
- [Hallucination Classification](https://docs.deeprails.com/engine/hallucination-classification.md): How DeepRails classifies and responds to hallucinations in Defend.
- [Multimodal Partitioned Evaluation](https://docs.deeprails.com/engine/multimodal-partitioned-evaluation.md): Multimodal Partitioned Evaluation (formerly known as HyperChainpoll) is DeepRails’ evaluation engine. MPE combines several evaluation techniques to deliver accurate, audit-ready scores across all metrics, without exposing your prompts or users to single-model bias.
- [Run Modes](https://docs.deeprails.com/engine/run-modes.md): DeepRails has six run modes that let you balance cost, latency, and accuracy across all APIs. Choose the intelligence level of the models behind your evaluations to best fit your needs.
- [Completeness](https://docs.deeprails.com/guardrails/completeness.md): LLM outputs often fixate on subsets of complex prompts and can occasionally deviate from the user's intended topic. The Completeness metric evaluates whether a response fully answers a prompt without going off track.
- [Comprehensive Safety](https://docs.deeprails.com/guardrails/comprehensive-safety.md): One of the largest concerns with using LLMs in automation is that dangerous content could be output uncensored and exposed to thousands or millions of people before it's caught. The Comprehensive Safety metric evaluates whether a response is completely devoid of potentially dangerous statements of any kind.
- [Context Adherence](https://docs.deeprails.com/guardrails/context-adherence.md): One of the most powerful aspects of generative AI is models' ability to adjust to a given context. As such, monitoring the model's adherence is the most critical metric for many use cases. Note, however, that the evaluation will not work effectively if DeepRails does not detect enough context in the prompt.
- [Correctness](https://docs.deeprails.com/guardrails/correctness.md): The scariest AI hallucinations are outputs that contain made-up or false information. The Correctness metric measures how verifiably true each claim in the LLM output is.
- [Ground Truth Adherence](https://docs.deeprails.com/guardrails/ground-truth-adherence.md): Though it's used more rarely than other metrics, Ground Truth Adherence is very useful in specific cases: when the model is provided a strict role to follow (the "ground truth"), use this metric to evaluate how well the model performed that role.
- [Instruction Adherence](https://docs.deeprails.com/guardrails/instruction-adherence.md): Prompts used in production workflows are often very complex and structured with dozens of validation rules. The Instruction Adherence metric assesses each of the input rules and evaluates whether the model response followed each one consistently.
- [Metrics Overview](https://docs.deeprails.com/guardrails/metrics-overview.md): Explore the DeepRails Guardrail Metrics, designed to holistically evaluate AI responses across all possible use cases.
- [DeepRails Overview](https://docs.deeprails.com/index.md): The only AI reliability platform that detects hallucinations and automatically corrects them before they reach your users.
- [Monitor Overview](https://docs.deeprails.com/monitor/overview.md): Monitor is the 'AirTag' for GenAI workflows: it gives you full observability over every generative AI workflow you run. It continuously scores outputs with DeepRails’ Guardrail Metrics; tracks usage, cost, latency, and failure rates in real time; and surfaces regressions before they reach your users.
- [Quickstart Guide](https://docs.deeprails.com/monitor/quickstart.md): Get started with the Monitor API in minutes.
- [DeepRails vs Competitors](https://docs.deeprails.com/vs-competitors.md): Understanding the fundamental difference: Detect-Only vs Detect-and-Fix.

## OpenAPI Specs

- [openapi](https://docs.deeprails.com/openapi.json)

## Optional

- [Python SDK](https://pypi.org/project/deeprails)
- [TypeScript SDK](https://www.npmjs.com/package/deeprails/)
- [Ruby SDK](https://rubygems.org/gems/deeprails)
- [Go SDK](https://pkg.go.dev/github.com/deeprails/deeprails-go-sdk)
- [Contact](mailto:support@deeprails.ai)
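For orientation, here is a minimal sketch of what submitting a model input and output pair to a monitor for evaluation might look like over raw HTTP. The base URL, endpoint path, payload field names, environment variable, and auth header below are illustrative assumptions, not the documented contract; consult the Submit a Monitor Event reference and the OpenAPI spec linked above for the actual API.

```python
# Hypothetical sketch only: the path, field names, and auth scheme are
# assumptions. See the "Submit a Monitor Event" API reference and the
# OpenAPI spec for the real contract.
import os

import requests

API_KEY = os.environ["DEEPRAILS_API_KEY"]  # assumed env var name
BASE_URL = "https://api.deeprails.com"     # assumed base URL


def submit_monitor_event(monitor_id: str, model_input: str, model_output: str) -> dict:
    """Send a model input/output pair to a monitor for guardrail evaluation."""
    resp = requests.post(
        f"{BASE_URL}/monitor/{monitor_id}/events",  # assumed path
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": model_input, "output": model_output},  # assumed fields
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    result = submit_monitor_event(
        monitor_id="mon_123",  # placeholder ID
        model_input="What is the capital of France?",
        model_output="The capital of France is Paris.",
    )
    print(result)
```

The official SDKs listed under Optional wrap calls like this one; prefer them over hand-rolled HTTP where available.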