Improvement Tools

When an output fails to meet the thresholds defined in your workflow, Defend automatically applies the remediation strategy you selected at workflow creation. There are three options.

FixIt

FixIt is Defend’s targeted correction strategy. Rather than discarding the original output and generating a new one, FixIt attempts to repair it.

How it works:
  1. Defend evaluates the output and identifies which guardrail metrics failed and why. The evaluation produces a per-metric rationale — a description of the specific factual errors, omissions, or adherence failures.
  2. FixIt uses the original prompt, the failed output, and the evaluation rationale to construct a repair prompt. This prompt instructs the model to correct only the identified failures while preserving everything else in the response.
  3. The repaired output is re-evaluated against the workflow’s guardrails. If it passes, it is returned as the final output. If it fails again, the cycle repeats until either the output passes or the workflow’s retry limit is reached.
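The three steps above can be sketched as a loop. This is an illustrative sketch only: the function names (`evaluate`, `repair`), the result shape, and the repair-prompt wording are hypothetical stand-ins, not the DeepRails API.

```python
def fixit_loop(prompt, failed_output, evaluate, repair, retry_limit=3):
    """Repair a failing output until it passes guardrails or retries run out."""
    output = failed_output
    for _ in range(retry_limit):
        result = evaluate(output)  # per-metric scores plus a failure rationale
        if result["passed"]:
            return output, "passed"
        # The repair prompt carries the original prompt, the failed output,
        # and the evaluation rationale, and asks for a targeted correction.
        repair_prompt = (
            f"Original prompt:\n{prompt}\n\n"
            f"Previous output:\n{output}\n\n"
            "Correct only these failures; preserve everything else:\n"
            + "\n".join(result["rationales"])
        )
        output = repair(repair_prompt)
    return output, "failed"
```

Note that each repaired output is re-evaluated before it is returned, which is what bounds the loop at the retry limit.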
When to use FixIt:
  • Outputs that are mostly correct but have isolated factual errors or omissions
  • Use cases where preserving the original tone, format, or structure matters
  • Domains where targeted correction is preferable to full regeneration (e.g., long-form documents, code, structured data)
Tradeoffs: FixIt uses more tokens per attempt than ReGen because it carries the failure rationale and the prior output into the repair prompt. In exchange, corrections tend to be more surgical and the output style stays consistent.

ReGen

ReGen discards the failed output and generates a fresh response from the original prompt.

How it works:
  1. Defend evaluates the output and determines it fails one or more guardrail thresholds.
  2. ReGen submits the original prompt again to the model with modified sampling parameters — typically increased temperature — to introduce controlled variance. This avoids regenerating the same failure.
  3. The new output is evaluated against the workflow’s guardrails. If it passes, it is returned. If it fails, the cycle repeats until the output passes or the retry limit is reached.
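The ReGen cycle can be sketched the same way. Again this is illustrative: `evaluate`, `generate`, and the specific temperature schedule are hypothetical, not the DeepRails implementation.

```python
def regen_loop(prompt, failed_output, evaluate, generate,
               retry_limit=3, base_temperature=0.7, temperature_step=0.2):
    """Regenerate from the original prompt until guardrails pass."""
    output = failed_output
    temperature = base_temperature
    for _ in range(retry_limit):
        if evaluate(output):
            return output, "passed"
        # Raise the temperature slightly so the retry does not
        # reproduce the same failure verbatim.
        temperature += temperature_step
        output = generate(prompt, temperature=temperature)
    return output, "failed"
```

Unlike FixIt, nothing from the failed output is carried into the retry; only the sampling parameters change between attempts.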
When to use ReGen:
  • Outputs where the root cause of failure is systemic (the model fundamentally got the task wrong, not just a detail)
  • Use cases where a fresh attempt is more likely to succeed than incremental repair
  • Short outputs where regeneration is cheap relative to FixIt’s repair cost
  • Situations where preserving the original output structure is less important
Tradeoffs: ReGen is token-efficient per attempt because it does not carry failure context. However, it provides less control over what changes between attempts — the model may fix one failure and introduce another. For structured or long-form outputs, FixIt typically produces more predictable results.

Do Nothing

Do Nothing records the failed output without attempting remediation.

How it works: Defend evaluates the output, records the failure (including which metrics failed, their scores, and the evaluation rationale), and returns the failed output to your application. No repair or regeneration is attempted.

When to use Do Nothing:
  • You want to monitor output quality without blocking or modifying outputs (observability mode)
  • Your application handles failures downstream and does not need Defend to remediate
  • You are baselining your current output quality before configuring active remediation
  • Use cases where any AI output — even imperfect — is preferable to a retry delay
Note: Even with Do Nothing, every evaluation is logged in the DeepRails Console under the workflow’s Data tab. You get full visibility into failure rates and rationales without incurring the latency cost of remediation.
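In observability mode the handling reduces to evaluate, log, and pass through. A minimal sketch, with `evaluate` and `log_event` as hypothetical stand-ins:

```python
def do_nothing(output, evaluate, log_event):
    """Record the evaluation but never modify or block the output."""
    result = evaluate(output)  # scores, failed metrics, rationale
    log_event(result)          # recorded for later review (e.g., the Data tab)
    return output              # returned unmodified, even on failure
```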

Retry Logic

When FixIt or ReGen is active, Defend will attempt remediation up to the retry limit configured in your workflow.
  • The default retry limit is 3 attempts (1 initial evaluation + 2 remediation attempts).
  • Each attempt is logged independently in the workflow’s evaluation history, including its guardrail scores, pass/fail status, and the output at that attempt.
  • If the output still fails after all retries are exhausted, Defend returns the best-scoring output from all attempts, along with a failed status and the full retry history.
  • The retry limit can be configured between 1 and 5 in the workflow creation wizard.
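The final-output selection described above (first passing attempt, otherwise the best-scoring one with a failed status and the full history) can be sketched as follows; the attempt record shape is illustrative, not the actual API response:

```python
def finalize(attempts):
    """attempts: list of {"output": str, "score": float, "passed": bool},
    in the order they were made."""
    for a in attempts:
        if a["passed"]:
            return a["output"], "passed", attempts
    # No attempt passed: return the best-scoring output plus full history.
    best = max(attempts, key=lambda a: a["score"])
    return best["output"], "failed", attempts
```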
Retry cost considerations: Each retry attempt consumes evaluation tokens (for the guardrail scoring) and generation tokens (for FixIt’s repair or ReGen’s regeneration). High retry limits combined with complex outputs and precision run modes will increase per-event cost. Monitor your workflow’s cost-per-event in the Console Metrics tab to calibrate.
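A rough worst-case token budget per event follows from the retry model above (every attempt is evaluated; every attempt after the first incurs remediation generation). The token counts here are arbitrary illustrative numbers, not DeepRails pricing:

```python
def worst_case_event_tokens(eval_tokens, remediation_tokens, retry_limit):
    """Upper bound on tokens consumed by one event if every retry is used."""
    evaluations = retry_limit        # initial evaluation + each re-evaluation
    remediations = retry_limit - 1   # FixIt repairs or ReGen regenerations
    return evaluations * eval_tokens + remediations * remediation_tokens

# e.g., 800 eval tokens, 1,200 remediation tokens, retry limit 3:
# 3 * 800 + 2 * 1200 = 4,800 tokens in the worst case.
```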

Adaptive vs. Custom Thresholds

Thresholds define the score cutoff below which an output is treated as a hallucination and triggers remediation. Defend supports two threshold modes.

Adaptive Thresholds

Adaptive thresholds are set by selecting a hallucination tolerance level at workflow creation: Low, Medium, or High.
| Tolerance | Behavior |
| --- | --- |
| Low | Strictest. Flags outputs at higher guardrail scores. Best for regulated, high-stakes, or safety-critical use cases. Expect more remediation events. |
| Medium | Balanced. Flags outputs with meaningful quality issues while tolerating minor imperfections. Good default for most production use cases. |
| High | Most permissive. Only flags outputs with significant quality failures. Best for use cases where speed and throughput matter more than perfect accuracy on every output. |
Adaptive thresholds adjust automatically as Defend learns the distribution of your workflow’s outputs over time, maintaining the selected tolerance level even as output quality changes.
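One way to build intuition for "adjusts automatically": imagine the cutoff tracking a percentile of recent guardrail scores, so each tolerance level flags a roughly fixed fraction of outputs even as the score distribution drifts. This is an interpretation for intuition only; the fractions and the mechanism are assumptions, not DeepRails' actual algorithm.

```python
def adaptive_cutoff(recent_scores, tolerance):
    """Pick a score cutoff so a tolerance-dependent fraction of recent
    outputs would be flagged (hypothetical percentile-tracking model)."""
    flag_fraction = {"low": 0.30, "medium": 0.15, "high": 0.05}[tolerance]
    ranked = sorted(recent_scores)
    index = int(len(ranked) * flag_fraction)
    return ranked[min(index, len(ranked) - 1)]
```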

Custom Thresholds

Custom thresholds let you define explicit numeric cutoff values for each guardrail metric. For example, you can require a correctness score above 0.85 while allowing completeness scores as low as 0.60 for a use case where partial answers are acceptable.

Custom thresholds are available on SME and Enterprise plans. They are configured per-metric in the workflow creation wizard and can be updated in the Manage Workflows tab without recreating the workflow.
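A per-metric check using the example cutoffs above can be sketched like this; the metric names and score dictionary shape are illustrative, not the actual evaluation payload:

```python
CUSTOM_THRESHOLDS = {"correctness": 0.85, "completeness": 0.60}

def passes_guardrails(scores, thresholds=CUSTOM_THRESHOLDS):
    """An output passes only if every configured metric meets its cutoff."""
    return all(scores.get(metric, 0.0) >= cutoff
               for metric, cutoff in thresholds.items())
```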

Run Modes and Their Effect on Remediation

The run mode you select at workflow creation affects both the accuracy of guardrail evaluations and the quality of FixIt repairs and ReGen outputs.
| Run Mode | Speed | Accuracy | Best For |
| --- | --- | --- | --- |
| Super Fast | Ultrafast | Basic | Maximum throughput, minimal latency, lowest-stakes use cases |
| Fast | Fastest | Good | High-throughput, low-stakes use cases |
| Precision | Moderate | High | Most production use cases |
| Precision Codex | Moderate | High (code-optimized) | Code generation and technical output |
| Precision Max | Slower | Highest | Regulated, safety-critical, or audit-grade use cases |
| Precision Max Codex | Slower | Highest (code-optimized) | High-stakes code generation |
Higher-accuracy run modes use more capable reasoning models for both evaluation and remediation. This means guardrail scores are more reliable and FixIt repairs are more accurate — but at higher token cost and latency. For most use cases, Precision is the right default.

Monitoring Defend in the Console

Every workflow produces a full audit trail visible in the DeepRails Console:
  • Metrics tab: Aggregate guardrail scores, hallucination filter rate, improvement success rate, and before/after score distributions across all events in the workflow.
  • Data tab: Event-level view of every evaluation run — input, output, guardrail scores, status (pass/fail/remediated), model, improvement attempt history, and cost metadata.
  • Manage Workflows: Workflow configuration details, throughput statistics, and threshold/tolerance settings. Thresholds and improvement tools can be updated without recreating the workflow.
Use the Console to calibrate your retry limits and thresholds over time. If your improvement success rate is low (Defend is consistently exhausting retries without passing), consider switching run modes or tightening your prompt before the output reaches Defend.