Detect LLM Hallucinations
Trace every LLM call, evaluate output alignment with your system prompts, and get real-time alerts when your AI goes off-script. Simple SDK — integrate in under 5 minutes.
```python
# pip install hallutraceai
from hallutraceai import HalluTrace

ht = HalluTrace(api_key="sk_live_...")
ht.trace(
    session_id="chat-123",
    type="agent",
    input="What is Python?",
    output="Python is a programming language.",
    system_prompt="You are a helpful assistant.",
)
# That's it. We handle evaluation automatically.
```

Everything you need to trust your LLM
From trace ingestion to hallucination scoring to real-time alerts — one platform to monitor and evaluate your AI outputs.
Real-Time Tracing
Capture every LLM call — inputs, outputs, system prompts, model names. Grouped by chat session automatically.
Hallucination Detection
An LLM-as-judge evaluates whether each output aligns with your system prompt. Scores run from 0 (perfect alignment) to 100 (fully hallucinated).
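Conceptually, an LLM-as-judge check assembles a grading prompt from the traced fields and asks a judge model for a score on the 0-100 scale above. A minimal sketch; the helper name and prompt wording are illustrative, not HalluTrace's actual eval prompt:

```python
def build_judge_prompt(system_prompt: str, user_input: str, output: str) -> str:
    """Assemble a grading prompt for a judge model.
    Illustrative wording only -- not HalluTrace's actual eval prompt."""
    return (
        "You are grading an AI response for alignment with its instructions.\n"
        f"System prompt: {system_prompt}\n"
        f"User input: {user_input}\n"
        f"Response: {output}\n"
        "Reply with a single integer from 0 (fully aligned) to 100 (hallucinated)."
    )

prompt = build_judge_prompt(
    "You are a geography assistant.",
    "What is the capital of France?",
    "The capital of France is Paris.",
)
```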
Instant Alerts
Get notified via email, SMS, or webhook when hallucination scores exceed your threshold. The default threshold is 50.
Rich Analytics
Score trends, distributions, model comparisons, session breakdowns — all with animated, interactive charts.
CSV Data Tables
No SDK? Upload CSV files with your LLM data and run hallucination checks directly from the dashboard.
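For the CSV route, a file with one row per LLM call works. The column names below mirror the SDK's trace fields but are an assumption; check the dashboard's upload screen for the expected header:

```python
import csv
import io

# Assumed columns mirroring the SDK's trace fields -- verify against
# the dashboard's CSV upload screen before relying on this header.
rows = [
    {
        "session_id": "chat-123",
        "type": "agent",
        "input": "What is Python?",
        "output": "Python is a programming language.",
        "system_prompt": "You are a helpful assistant.",
    },
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```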
Simple Integration
3 lines of Python. Or use our REST API. Or swap your OpenAI base URL. Works with any LLM provider.
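If you prefer raw HTTP over the SDK, a trace call is a single POST. The endpoint URL and header names below are placeholders, not the documented API; only the payload fields come from the SDK example on this page:

```python
import json
import urllib.request

# Placeholder endpoint and headers -- see the HalluTrace docs for the
# real REST API paths. Payload fields match the SDK example above.
payload = {
    "session_id": "chat-123",
    "type": "agent",
    "input": "What is Python?",
    "output": "Python is a programming language.",
    "system_prompt": "You are a helpful assistant.",
}
req = urllib.request.Request(
    "https://api.hallutrace.example/v1/trace",  # placeholder URL
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk_live_...",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # send once you have the real endpoint
```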
See hallucinations at a glance
Rich analytics, score distributions, and session breakdowns — all animated and interactive.
Example dashboard (Project: My AI Chatbot, last 7 days): Avg Score 12.4 · Sessions 342 · Flagged 18 · Messages 2.8K.
Score distribution and session breakdown:
| Session | Messages | Avg Score |
|---|---|---|
| chat-a1b2 | 12 | 8 |
| chat-c3d4 | 5 | 24 |
| chat-e5f6 | 8 | 62 |
| chat-g7h8 | 3 | 15 |
Three steps to hallucination-free AI
Integrate SDK
Install our Python or JS SDK. Add 3 lines of code. Every LLM call is now traced — inputs, outputs, system prompts, and metadata.
Auto Evaluate
Our engine automatically scores each response for hallucination. LLM-as-judge checks alignment with your system prompt. Score 0-100.
Monitor & Alert
View scores in your dashboard with rich charts. Set thresholds. Get instant alerts via email, SMS, or webhook when things go wrong.
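A webhook receiver can apply the same threshold logic on your side. A minimal sketch, assuming the alert payload carries `session_id` and `score` fields; the payload shape is a guess, not the documented schema:

```python
def handle_alert(payload: dict, threshold: float = 50) -> str:
    """Route an assumed webhook alert payload.
    The 'score' / 'session_id' field names are a guess at the schema;
    the default threshold of 50 matches this page."""
    score = payload.get("score", 0)
    session = payload.get("session_id", "unknown")
    if score > threshold:
        return f"PAGE ON-CALL: session {session} scored {score}"
    return f"ok: session {session} scored {score}"

print(handle_alert({"session_id": "chat-e5f6", "score": 62}))
```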
Simple, transparent pricing
Start free. Scale as you grow. Only pay for evaluations you use.
Free
Perfect for trying out hallucination detection.
Start Free
- 10,000 evals / month
- 3 projects
- 7-day data retention
- Basic dashboard & charts
- Email alerts
- 1 team member
Pay as You Go
Scale without limits. Pay only for what you use.
Get Started
- Unlimited evals
- Unlimited projects
- 90-day data retention
- Full analytics & charts
- Email, SMS, webhook alerts
- 5 team members
- JSON export API
- CSV data tables
- Priority support
Enterprise
For teams that need full control and SLA.
Contact Sales
- Everything in Pay as You Go
- Unlimited retention
- Unlimited team members
- SSO / SAML
- Custom eval prompts
- Dedicated support & SLA
- On-premise option
- Custom integrations
Pay as You Go is just $1 per 1,000 evaluations. No hidden fees. No minimum commitment.
Integrate in under 5 minutes
Three lines of code. That's all it takes to start detecting hallucinations in your LLM outputs.
```python
from hallutraceai import HalluTrace

ht = HalluTrace(api_key="sk_live_your_key")
ht.trace(
    session_id="chat-123",
    type="agent",
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    system_prompt="You are a geography assistant.",
)
```

Stop hallucinations before your users notice
Join teams using HalluTrace AI to monitor, evaluate, and improve their LLM outputs. Start with 10,000 free evaluations every month.