Parry sits between your AI agents and the LLMs they call. It detects prompt injection, tool misuse, and behavioural drift in real time, and blocks dangerous calls before they ever reach the model.
```python
# before
from openai import OpenAI
client = OpenAI()

# after — that's it
from parry.wrappers.openai import SentinelOpenAI
client = SentinelOpenAI(agent_id="support-bot")
```

How it works
1. Install: `pip install parry`. Two-line setup. Works with OpenAI, Anthropic, LangChain, CrewAI, AutoGen, LlamaIndex, and Pydantic AI.
2. Wrap: replace your LLM client with the Parry wrapper. Every call is intercepted, classified, and shipped to the detection engine.
3. Monitor: live block feed, fleet health, behavioural graphs, session replay. Configure policies and alerts as your agents grow.
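The wrap-and-intercept step can be pictured as a thin proxy: classify the prompt first, forward only clean calls to the underlying client. This is an illustrative sketch of the pattern, not Parry's actual internals — `SentinelClient`, `Verdict`, and the toy detector are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    severity: str  # e.g. "LOW", "HIGH", "CRITICAL"
    reason: str

class BlockedCallError(Exception):
    """Raised when a call is blocked before reaching the provider."""

class SentinelClient:
    """Hypothetical stand-in for a wrapper like SentinelOpenAI."""

    def __init__(self, inner, agent_id, detect):
        self.inner = inner        # the wrapped LLM client
        self.agent_id = agent_id  # attributes events to one agent
        self.detect = detect      # callable: prompt -> Verdict

    def complete(self, prompt: str) -> str:
        verdict = self.detect(prompt)
        if verdict.severity in ("HIGH", "CRITICAL"):
            # Block before the LLM is ever hit.
            raise BlockedCallError(f"{verdict.severity}: {verdict.reason}")
        return self.inner.complete(prompt)

class EchoLLM:
    """Fake inner client so the sketch is runnable."""
    def complete(self, prompt):
        return f"echo: {prompt}"

def naive_detector(prompt):
    if "ignore previous instructions" in prompt.lower():
        return Verdict("CRITICAL", "prompt injection pattern")
    return Verdict("LOW", "ok")

client = SentinelClient(EchoLLM(), agent_id="support-bot",
                        detect=naive_detector)
```

The point of the shape: the agent code calls `client.complete(...)` exactly as before, and blocking happens inside the wrapper, so no call site changes.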
What you get
Built around the way real agents fail: silent drift, opaque tool calls, and prompt injection that only shows up in production.
Pattern, classifier, and LLM-fallback detectors run on every call. HIGH and CRITICAL triggers can block before the LLM is hit.
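A layered pattern → classifier → LLM-fallback pipeline can be sketched as a short-circuiting chain of stages. The stages below are toys (a regex, a keyword score, a stubbed judge model), purely to show the control flow; none of this is Parry's real detection engine.

```python
import re

SEVERITIES = ["NONE", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def pattern_stage(text):
    # Cheap regexes catch known injection phrasings first.
    if re.search(r"ignore (all )?previous instructions", text, re.I):
        return "CRITICAL"
    return None  # no opinion; fall through to the next stage

def classifier_stage(text):
    # Stand-in for a trained classifier: a toy keyword score.
    hits = sum(w in text.lower()
               for w in ("system prompt", "exfiltrate", "api key"))
    return "HIGH" if hits >= 2 else None

def llm_fallback_stage(text):
    # In production this would ask a judge model; assume benign here.
    return "NONE"

def classify(text):
    for stage in (pattern_stage, classifier_stage, llm_fallback_stage):
        verdict = stage(text)
        if verdict is not None:
            return verdict
    return "NONE"

def should_block(text):
    # Only HIGH and CRITICAL block the call.
    return SEVERITIES.index(classify(text)) >= SEVERITIES.index("HIGH")
```

The cheap stage runs first so most traffic never pays for the expensive fallback.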
Org-defined allowlists, blocked domains, and budget limits — enforced inline. Detect tool misuse before it touches production data.
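An inline policy check of this kind boils down to a few guards evaluated before the tool runs. The policy shape below (field names, limits) is an assumption for illustration, not Parry's schema.

```python
from urllib.parse import urlparse

# Hypothetical org policy; the real schema is org-defined.
POLICY = {
    "allowed_tools": {"search", "lookup_order"},
    "blocked_domains": {"pastebin.com"},
    "max_spend_usd": 5.00,
}

def check_tool_call(tool, url, spent_usd, policy=POLICY):
    """Return (allowed, reason); meant to run before the tool executes."""
    if tool not in policy["allowed_tools"]:
        return False, f"tool '{tool}' not on allowlist"
    if url and urlparse(url).hostname in policy["blocked_domains"]:
        return False, "blocked domain"
    if spent_usd >= policy["max_spend_usd"]:
        return False, "budget limit reached"
    return True, "ok"
```

Because the check is inline, a denied tool call fails closed instead of reaching production data.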
Per-agent token, latency, and tool-call baselines. Anomaly detection flags drift from real production behaviour, not synthetic benchmarks.
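One simple way to flag drift against a per-agent baseline is a rolling z-score over recent observations, here over token counts. The window size, warm-up length, and threshold are illustrative assumptions, not Parry's tuning.

```python
from collections import deque
from statistics import mean, stdev

class Baseline:
    """Toy per-agent baseline: flag values far from the rolling history."""

    def __init__(self, window=200, z_threshold=3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, tokens):
        """Record one observation; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 30:  # need enough samples to trust stats
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(tokens - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(tokens)
        return anomalous
```

Because the baseline is built from each agent's own production traffic, "normal" is whatever that agent actually does, not a synthetic benchmark.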
Reconstruct any session call-by-call with detection results inline. Full prompt and response for admins, previews for everyone else.
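Call-by-call replay with role-based visibility can be sketched as ordering stored events and redacting per viewer. The event fields and preview length here are assumptions made for the example.

```python
def preview(text, limit=20):
    """Truncated preview for non-admin viewers."""
    return text if len(text) <= limit else text[:limit] + "…"

def replay(events, is_admin):
    """Rebuild a session in call order; admins see full prompts,
    everyone else sees previews. Detection verdicts are shown inline."""
    out = []
    for e in sorted(events, key=lambda e: e["ts"]):
        prompt = e["prompt"] if is_admin else preview(e["prompt"])
        out.append({"ts": e["ts"], "prompt": prompt,
                    "verdict": e["verdict"]})
    return out
```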
Slack, email, webhooks, PagerDuty, Opsgenie — gated by severity, auditable, and never noisy.
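Severity gating is what keeps the channels quiet: each one subscribes at a minimum severity, so low-level events never page anyone. The routing table below is an assumed example, not a shipped default.

```python
LEVELS = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

# Hypothetical per-channel severity floors.
ROUTES = {
    "slack": "MEDIUM",
    "email": "HIGH",
    "pagerduty": "CRITICAL",
}

def route(severity):
    """Return the channels whose floor this severity clears."""
    return sorted(channel for channel, floor in ROUTES.items()
                  if LEVELS[severity] >= LEVELS[floor])
```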
Generate SOC 2 / EU AI Act-ready PDFs over any window. Counts and policy posture, never raw prompts — GDPR-safe by design.
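The "counts, never raw prompts" posture amounts to aggregating events over a window and deliberately omitting content fields from the output. Event and report shapes here are assumptions for illustration.

```python
from collections import Counter

def compliance_summary(events, start, end, policy_version):
    """Aggregate a time window into counts by severity plus policy
    posture. No prompt or response text appears in the output."""
    window = [e for e in events if start <= e["ts"] < end]
    return {
        "window": (start, end),
        "policy_version": policy_version,
        "total_calls": len(window),
        "by_severity": dict(Counter(e["severity"] for e in window)),
        # Deliberately no prompt/response fields in the report.
    }
```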
Pricing
- Try Parry against a single agent.
- For teams shipping AI features.
- For production fleets.
Need Enterprise? Unlimited agents, on-prem deployment, SOC 2 controls — talk to us.
Free tier covers the first agent and the first 10,000 events. No credit card.