Introduction
Récif is the open-source platform to govern, deploy, and operate AI agents — any framework, any cloud.
What is Récif?
Récif is an open-source agentic platform that separates how you build agents from how you deploy and manage them. It provides enterprise-grade governance, evaluation, and operations for AI agents — regardless of the framework or cloud provider you use.
The platform is built on two independent layers:
- Corail (the coral) — an autonomous Python agent runtime. Each agent runs in its own container with its own LLM, tools, memory, and channels. Corail works standalone — no platform required.
- Récif (the reef) — the governance and operations layer. A Go API + Next.js dashboard that adds evaluation, releases, governance, monitoring, and multi-tenancy on top of your agents.
Tip
Think of it like Docker and Kubernetes. Docker (Corail) runs containers independently. Kubernetes (Récif) orchestrates and governs them at scale. You can use one without the other, but together they're powerful.
Two Access Paths — At the Same Time
This is the key insight: Corail agents are autonomous. They don't need Récif to run. But they can also be governed by Récif. Both happen simultaneously on the same agent — this is not a choice between one or the other.
Direct Access — Developer / Integration
Your application talks directly to the Corail agent. No platform in between. This is ideal for:
- Developers building an AI feature into their app
- API integrations — your backend calls the agent's REST API
- Slack/Discord bots — the agent connects its own channel
- CI/CD pipelines — automated agents triggered by events
Your App → Corail Agent (port 8000) → LLM → Response

Each agent exposes its own REST API, SSE streaming, and channel connectors. It handles its own tools, memory, and knowledge bases. Zero dependency on the platform.
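The direct path is just HTTP. The sketch below builds a request to an agent's REST API from Python's standard library; the `/chat` endpoint and the payload shape are assumptions for illustration, so check your agent's actual API reference before wiring this up.

```python
import json
import urllib.request

def build_chat_request(base_url: str, message: str) -> urllib.request.Request:
    """Build a POST request to a Corail agent's REST API.

    The /chat path and {"message": ...} payload are hypothetical;
    substitute the real endpoint from your agent's API docs.
    """
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Your backend talks to the agent directly on port 8000 -- no platform in between.
req = build_chat_request("http://localhost:8000", "Summarize today's tickets")
# response = urllib.request.urlopen(req)  # uncomment against a running agent
```

Because the agent is a plain HTTP service, the same call works from a CI job, a Slack handler, or any backend, with no Récif component in the request path.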
Platform Access — Enterprise / Mass Adoption
At the same time, users can interact through the Récif dashboard or API. The platform proxies requests to the same agents and adds governance, monitoring, and evaluation. This is ideal for:
- Enterprise adoption — give every employee access to AI agents through a single portal
- Multi-team environments — each team has its own namespace, agents, and permissions
- Production governance — eval-gated releases, canary deployments, quality gates
- Compliance — audit trails, cost tracking, guardrail policies
User → Récif Dashboard/API (port 8080) → proxy → Corail Agent → LLM → Response
                         │
                         └── Governance, Eval, Monitoring, Feedback, Releases

The platform adds:
- Authentication & RBAC — who can access which agents
- Evaluation pipeline — 14 MLflow scorers gate every release
- Canary deployments — 10% → 50% → 100% with quality gates
- Feedback loop — user thumbs up/down feeds back into evaluation datasets
- AI Radar — fleet-wide health monitoring and alerts
- GitOps releases — every config change is a versioned, auditable artifact
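The canary mechanics above can be sketched as a tiny state machine: traffic steps from 10% to 50% to 100%, advancing only while the quality gates pass and rolling back otherwise. The function and threshold names below are illustrative, not Récif's actual API.

```python
# Challenger traffic steps for an eval-gated canary rollout.
CANARY_STEPS = [10, 50, 100]

def next_traffic_weight(current: int, gates_passed: bool) -> int:
    """Return the challenger's next traffic share.

    Promote to the next step when all quality gates pass;
    drop to 0% (the champion keeps serving) on any failure.
    """
    if not gates_passed:
        return 0  # auto-rollback
    idx = CANARY_STEPS.index(current)
    if idx + 1 < len(CANARY_STEPS):
        return CANARY_STEPS[idx + 1]
    return 100  # already fully promoted

print(next_traffic_weight(10, gates_passed=True))   # 50
print(next_traffic_weight(50, gates_passed=False))  # 0
```

In the real platform this progression is driven by Flagger against the scorer results, not by application code, but the promote-or-rollback decision at each step is the same shape.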
Both Paths, One Agent
Both paths coexist on the same agent, at the same time. A CI/CD pipeline calls the agent directly via REST while an employee chats with it through the dashboard. The agent doesn't know or care which path the request came from. Récif layers governance on top without touching the agent code.
| | Direct (Corail only) | Platform (Récif + Corail) |
|---|---|---|
| Setup | docker run corail | helm install recif |
| Auth | None or custom | JWT, RBAC, multi-tenant |
| Channels | REST, Slack, CLI, WebSocket | All channels + Dashboard UI |
| Evaluation | Manual | 14 automated scorers, eval-gated releases |
| Governance | None | Scorecards, policies, risk profiles |
| Monitoring | Logs | AI Radar, MLflow traces, cost tracking |
| Releases | Manual deploy | GitOps, canary, auto-rollback |
| Multi-tenant | Single namespace | Namespace-per-team, RBAC |
| Best for | Developers, integrations | Enterprise, compliance, mass adoption |
Note
You don't have to choose upfront. Start with Corail standalone, add Récif when you need governance. The agent code doesn't change.
Architecture at a Glance
Each agent runs as a Kubernetes Pod with its own Corail runtime. Récif manages them through a kubebuilder operator that reconciles Agent CRDs into Deployments, Services, and ConfigMaps.
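The operator itself is written in Go with kubebuilder, but the reconciliation it performs is a straightforward mapping from an Agent custom resource to its child objects. The Python sketch below illustrates that mapping; the CRD field names (`image`, `port`, `config`) are assumptions for illustration, not the real schema.

```python
def reconcile(agent: dict) -> dict:
    """Derive the child objects an Agent CRD reconciles into.

    Hypothetical spec fields; see the actual CRD schema for the
    real ones. Returns minimal Deployment/Service/ConfigMap shapes.
    """
    name = agent["metadata"]["name"]
    ns = agent["metadata"]["namespace"]
    return {
        "deployment": {
            "metadata": {"name": name, "namespace": ns},
            "spec": {"template": {"spec": {"containers": [
                {"name": "corail", "image": agent["spec"]["image"]},
            ]}}},
        },
        "service": {
            "metadata": {"name": name, "namespace": ns},
            # Corail agents listen on 8000 by default in this sketch.
            "spec": {"ports": [{"port": agent["spec"].get("port", 8000)}]},
        },
        "configmap": {
            "metadata": {"name": f"{name}-config", "namespace": ns},
            "data": agent["spec"].get("config", {}),
        },
    }

children = reconcile({
    "metadata": {"name": "support-bot", "namespace": "team-a"},
    "spec": {"image": "corail:1.0", "config": {"LLM_PROVIDER": "ollama"}},
})
```

One Pod per agent keeps the isolation story simple: each agent's LLM credentials, tools, and memory live in its own container, and the operator only ever reconciles declarative state.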
Core Capabilities
| Capability | Description |
|---|---|
| Autonomous Agents | Each agent is a standalone container — runs independently with its own LLM, tools, memory |
| 7 LLM Providers | OpenAI, Anthropic, Google AI, Vertex AI, Ollama, Bedrock, custom |
| Multi-Channel | REST API, Slack, Google Chat, CLI, WebSocket — per agent |
| Evaluation Pipeline | 14 MLflow scorers, golden datasets, LLM-as-judge, eval-gated releases |
| Canary Deployments | Champion/challenger with Flagger quality gates, auto-promote/rollback |
| Governance | 4-dimension scorecards, guardrail policies, risk profiles |
| Multi-Tenancy | Namespace-per-team, RBAC (4 roles), resource isolation |
| GitOps Releases | Immutable YAML artifacts in Git, full audit trail, ArgoCD-ready |
| Knowledge Bases | Marée ingestion pipeline, pgvector, Docling extraction |
| AI Radar | Fleet health monitoring, drift detection, cost tracking |
| Framework Agnostic | ADK, LangChain, CrewAI, or bring your own |
| MCP Tools | Native MCP support alongside HTTP, CLI, and builtin tools |
Next Steps
- Quickstart — install Récif and deploy your first agent in 10 minutes
- Architecture — deep dive into the two-layer architecture
- LLM Providers — configure OpenAI, Ollama, Vertex AI, and more
- Evaluation Guide — the eval-driven lifecycle explained