Deploy autonomous agents on Kubernetes with evaluation-driven lifecycle, enterprise governance, and real-time monitoring. No vendor lock-in.
We've all seen the demo. An AI agent answers questions, calls tools, chains reasoning steps. The room is impressed. Then it reaches production.
Most tools let you deploy without measuring. An agent inventing policies, deleting databases, hallucinating answers — these aren't edge cases. They're what happens without operational discipline.
Proprietary platforms tie you to one model provider and one deployment model. Your agents become hostages to someone else's roadmap and pricing.
Open-source chat UIs lack multi-tenancy, audit trails, guardrails, and the operational discipline required for regulated environments.
Récif is infrastructure for autonomous agents. A control tower, not a conversation UI.
Récif isn't another ChatGPT wrapper. It's the control tower for autonomous agents that run in production. Deploy once, govern forever.
Agents don't live inside Récif — they live independently in their own containers. Corail agents are autonomous: they run their own runtime, their own model, their own tools. Récif is the ecosystem that connects, governs, and observes them — like a reef nurturing its corals.
Scorecards, guardrails, versioned releases, audit trails. Know what your agents do, why, and how well.
Agents are autonomous containers. Récif is the control tower that governs them. They communicate via gRPC, deploy via GitOps, and run on Kubernetes.
Every agent runs with an Istio sidecar. mTLS encryption, traffic management, canary deployments, and full observability — out of the box. No competitor offers this.
Deploy v2 on 10% of traffic. Compare scores, latency, error rates. Progressive rollout: 10% → 50% → 100%. Auto-rollback if quality degrades.
Kiali shows the service graph in real time. See which agent talks to which DB, LLM, or tool. Latency, error rate, throughput — per agent, per version. Distributed tracing included.
Every agent-to-agent and agent-to-service communication is encrypted automatically. Zero-trust networking with zero config. Certificate rotation handled by Istio.
A/B test two models on the same agent. Blue/green deployments. Rate limiting per agent. Circuit breaker — if an agent crashes, traffic is cut instantly.
From model selection to governance, Récif handles the full lifecycle.
Ollama, Anthropic, AWS Bedrock, Vertex AI, OpenAI. Switch providers without changing code.
Anthropic-compatible skill packages. Import from GitHub, build custom, share across teams.
pgvector-powered retrieval. Connect Drive, Jira, Confluence, Databricks natively.
Real-time health, latency, token consumption, cost tracking, alerts per agent.
Scorecards, quality gates, guardrail policies, risk profiles. Enterprise compliance built-in.
Every config change is a Git commit. Immutable artifacts, diff, rollback, full audit trail.
Product teams create agents in minutes. Engineers scaffold projects with LangChain, CrewAI, AutoGen.
Deploy new versions to a subset of traffic. Evaluate with golden datasets before full rollout.
Native MCP tool support. Plus HTTP, CLI, and custom tool types. Connect GitHub, Jira, Slack, AWS, GCP, and more.
Agents communicate through any channel. Deploy once, reach everywhere.
Connect to your existing tools. Agents inherit platform integrations automatically.
From zero to governed agents in minutes, not months.
One command. Kind + Helm locally, or Terraform for cloud. The full platform spins up in minutes.
Create agents via the dashboard or define them as Kubernetes CRDs. Infrastructure as code, natively.
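To make the CRD idea concrete, here is a minimal sketch of building an agent manifest as a plain Python dict. The API group `corail.recif.io`, the `CorailAgent` kind, and every `spec` field below are illustrative assumptions, not the actual schema — consult the CRDs shipped with Récif for the real shape.

```python
# Illustrative sketch only: the group/version, kind, and spec fields
# are assumptions for demonstration, not Récif's actual CRD schema.

def agent_manifest(name: str, namespace: str, model: str, version: str) -> dict:
    """Return a Kubernetes manifest dict for a hypothetical CorailAgent resource."""
    return {
        "apiVersion": "corail.recif.io/v1alpha1",  # assumed API group/version
        "kind": "CorailAgent",                      # assumed kind
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "model": model,           # e.g. an Ollama or Bedrock model id
            "version": version,       # immutable release tag
            "riskProfile": "MEDIUM",  # selects which scorers run (see governance)
        },
    }

manifest = agent_manifest("support-bot", "team-a", "claude-sonnet", "v2")
```

The same dict could be written as YAML and applied with `kubectl`, or created through the dashboard — both paths end up as the same declarative resource the operator reconciles.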
Monitor, evaluate, and control your agents at scale. Scorecards grade quality. Guardrails enforce policy.
From data ingestion to user feedback, every component feeds the evaluation loop. No agent ships without proof. No regression goes undetected.
Safety, Relevance, Correctness, Completeness, Fluency, Equivalence, Summarization, Guidelines, ExpectationsGuidelines, RetrievalRelevance, RetrievalGroundedness, RetrievalSufficiency, ToolCallCorrectness, ToolCallEfficiency.
LOW, MEDIUM, HIGH risk profiles select which scorers run. Governance scorecards grade 4 dimensions: Quality (35%), Safety (30%), Cost (20%), Compliance (15%). Policies enforce token limits, latency SLAs, blocked topics, daily cost caps.
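The weighted scorecard above reduces to a simple computation. The weights come straight from the text; the function names and the pass threshold are illustrative assumptions:

```python
# Sketch of the 4-dimension governance scorecard described above.
# Weights come from the text; names and threshold are illustrative.

WEIGHTS = {"quality": 0.35, "safety": 0.30, "cost": 0.20, "compliance": 0.15}

def scorecard(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into one weighted grade."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def passes_gate(scores: dict[str, float], threshold: float = 60.0) -> bool:
    """Quality gate: the weighted grade must clear the threshold."""
    return scorecard(scores) >= threshold

grade = scorecard({"quality": 80, "safety": 90, "cost": 70, "compliance": 60})
# 0.35*80 + 0.30*90 + 0.20*70 + 0.15*60 = 28 + 27 + 14 + 9 = 78.0
```

An agent strong on safety but weak on compliance can still pass — the weighting is the point: quality and safety dominate the grade.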
Every release starts as pending_eval. Corail runs scoring async and POSTs results to a callback. If scores pass governance thresholds → approved and applied to K8s. If not → rejected and auto-rolled back.
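The release lifecycle described above is a small state machine. The state names come from the text; the callback handler, record shape, and averaging rule are illustrative assumptions:

```python
# Sketch of the eval-gated release lifecycle: pending_eval -> approved
# or rejected, driven by the async scoring callback. State names come
# from the text; everything else is an illustrative assumption.

from dataclasses import dataclass, field

@dataclass
class Release:
    version: str
    state: str = "pending_eval"
    scores: dict = field(default_factory=dict)

def on_eval_callback(release: Release, scores: dict,
                     threshold: float = 60.0) -> Release:
    """Handle the scoring callback: approve and apply, or reject and roll back."""
    release.scores = scores
    avg = sum(scores.values()) / len(scores)
    if avg >= threshold:
        release.state = "approved"   # would then be applied to K8s
    else:
        release.state = "rejected"   # would trigger auto-rollback
    return release

good = on_eval_callback(Release("v2"), {"relevance": 72, "safety": 88})
bad = on_eval_callback(Release("v3"), {"relevance": 20, "safety": 30})
```

Because the release only ever moves through these states via the scoring callback, there is no path to production that skips evaluation.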
Deploy v2 on 10% of traffic. Flagger's webhook queries MLflow for live eval scores. If avg ≥ 60% → promote to 100%. If not → auto-rollback. Zero manual intervention.
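The canary gate above boils down to one decision per Flagger check. The ≥ 60% rule comes from the text; the MLflow query is stubbed out and the "hold" behavior for missing data is an assumption:

```python
# Sketch of the canary gate described above: Flagger's webhook asks for
# live eval scores and promotes or rolls back. The MLflow query is
# stubbed out; only the >= 60% rule comes from the text.

def canary_decision(eval_scores: list[float], threshold: float = 0.60) -> str:
    """Return 'promote' when the average live eval score clears the bar."""
    if not eval_scores:
        return "hold"  # no eval data yet: keep the canary at 10% (assumption)
    avg = sum(eval_scores) / len(eval_scores)
    return "promote" if avg >= threshold else "rollback"
```

In practice this function would sit behind the webhook endpoint Flagger polls during each analysis interval, with `eval_scores` fetched from MLflow for the canary version only.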
User thumbs-down (score < 3/5) auto-appends the failing input to the agent's golden dataset. Expert annotations via MLflow assessments add expected outputs. Next eval run includes these cases. The agent gets better with every interaction.
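The feedback loop above can be sketched in a few lines. The thumbs-down threshold (score < 3) comes from the text; the record shape and function name are illustrative assumptions:

```python
# Sketch of the feedback loop described above: a thumbs-down (score < 3/5)
# appends the failing input to the agent's golden dataset. The record
# shape is an illustrative assumption.

def record_feedback(golden_dataset: list[dict], user_input: str,
                    agent_output: str, score: int) -> list[dict]:
    """Add low-rated interactions to the golden dataset for the next eval run."""
    if score < 3:  # thumbs-down threshold from the text
        golden_dataset.append({
            "input": user_input,
            "observed_output": agent_output,
            "expected_output": None,  # filled in later by expert annotation
        })
    return golden_dataset

dataset = record_feedback([], "refund policy?", "You get 90 days.", 1)
dataset = record_feedback(dataset, "store hours?", "9am-6pm.", 5)
```

Note that `expected_output` starts empty: the failing input is captured immediately, and the expert annotation via MLflow assessments supplies the correct answer afterwards.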
A pluggable ingestion pipeline that transforms documents into searchable vector embeddings. From raw PDF to agent-ready knowledge in one command.
Each stage is replaceable. Swap Docling for Tika, switch from pgvector to Pinecone, add custom processors. The pipeline adapts to your stack.
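The replaceable-stage design above amounts to composing plain callables. The pipeline runner below is a minimal sketch; the toy extract/chunk/embed stages stand in for real components like Docling and pgvector and are purely illustrative:

```python
# Sketch of the pluggable ingestion pipeline described above. Each stage
# is a plain callable you can swap (Docling -> Tika, pgvector -> Pinecone);
# the stage names and toy implementations are illustrative assumptions.

from typing import Callable

Stage = Callable[[object], object]

def run_pipeline(document: object, stages: list[Stage]) -> object:
    """Feed a raw document through the replaceable stages, in order."""
    for stage in stages:
        document = stage(document)
    return document

# Toy stand-ins for real extract / chunk / embed stages:
extract = lambda doc: doc["text"]
chunk = lambda text: [text[i:i + 8] for i in range(0, len(text), 8)]
embed = lambda chunks: [(c, float(len(c))) for c in chunks]

vectors = run_pipeline({"text": "raw pdf text here"}, [extract, chunk, embed])
```

Swapping a stage means swapping one callable — the runner never changes, which is what keeps the pipeline adaptable to your stack.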
Connect to where your knowledge already lives.
IBM's Docling handles complex documents: PDFs with tables, scanned images with OCR, DOCX with embedded formatting. Production-grade extraction, not toy parsing.
Honest comparison with real competitors. Récif doesn't compete with chat UIs — it's a different category.
| Feature | Récif | Dify | LibreChat | OpenWebUI | CrewAI | Gemini Enterprise |
|---|---|---|---|---|---|---|
| Autonomous Agents | ✓ | ✓ | ~ | ~ | ✓ | ✓ |
| Eval-Gated Releases | ✓ | ✕ | ✕ | ✕ | ✕ | ~ |
| Canary Deployments | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ |
| Service Mesh (Istio) | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ |
| K8s CRDs + Operator | ✓ | ✕ | ✕ | ~ | ✕ | ~ |
| GitOps Releases | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ |
| Governance & Scorecards | ✓ | ~ | ✕ | ✕ | ✕ | ✓ |
| Multi-Model (8+ providers) | ✓ | ✓ | ✓ | ✓ | ✓ | ~ |
| MCP Tools | ✓ | ✓ | ✓ | ~ | ✓ | ✓ |
| RAG / Knowledge Base | ✓ | ✓ | ~ | ✓ | ✓ | ✓ |
| Agent Memory | ✓ | ~ | ✕ | ✕ | ✓ | ✓ |
| Visual Workflow Builder | ✕ | ✓ | ✕ | ✕ | ~ | ~ |
| Multi-Tenant | ✓ | ✓ | ~ | ~ | ~ | ✓ |
| Open Source | ✓ | ✓ | ✓ | ✓ | ✓ | ✕ |
| Self-Hosted | ✓ | ✓ | ✓ | ✓ | ✓ | ✕ |
Based on real feedback from teams evaluating Dify, LibreChat, OpenWebUI, CrewAI, and Gemini Enterprise.
LibreChat forces N separate instances — one per team, no shared governance, no cost visibility. Récif runs as a single platform with namespace-per-team isolation, centralized governance, and per-agent cost tracking.
Other platforms force you to repackage agents as MCP tools. Récif agents live in their own containers — bring your own framework, your own code, your own tools. The control bridge connects everything.
Not bolted on — built in. Scorecards, guardrails, quality gates, versioned releases with Git audit trail. Know what your agents do, how well, and how much they cost.
The enterprise vision — a governed, centralized agent platform — without vendor lock-in. Any model (Ollama, Anthropic, Bedrock, Vertex), any cloud, Apache 2.0 license.
Product teams create agents in minutes with the no-code wizard. Engineering teams scaffold custom projects with their framework of choice. Both coexist in one platform.
Platform admins govern everything. Team admins manage their agents. Developers build and deploy. Viewers observe. Namespace isolation ensures no team can affect another.
Deploy Récif in minutes. Join the community building the future of autonomous AI infrastructure.