v0.1.0 · Apache 2.0

Introduction

Récif is the open-source platform to govern, deploy, and operate AI agents — any framework, any cloud.

What is Récif?

Récif is an open-source agentic platform that separates how you build agents from how you deploy and manage them. It provides enterprise-grade governance, evaluation, and operations for AI agents — regardless of the framework or cloud provider you use.

The platform is built on two independent layers:

  • Corail (the coral) — an autonomous Python agent runtime. Each agent runs in its own container with its own LLM, tools, memory, and channels. Corail works standalone — no platform required.
  • Récif (the reef) — the governance and operations layer. A Go API + Next.js dashboard that adds evaluation, releases, governance, monitoring, and multi-tenancy on top of your agents.

Tip

Think of it like Docker and Kubernetes. Docker (Corail) runs containers independently. Kubernetes (Récif) orchestrates and governs them at scale. You can use one without the other, but together they're powerful.

Two Access Paths — At the Same Time

This is the key insight: Corail agents are autonomous. They don't need Récif to run. But they can also be governed by Récif. Both happen simultaneously on the same agent — this is not a choice between one or the other.

Both Access Paths

Direct Access — Developer / Integration

Your application talks directly to the Corail agent. No platform in between. This is ideal for:

  • Developers building an AI feature into their app
  • API integrations — your backend calls the agent's REST API
  • Slack/Discord bots — the agent connects its own channel
  • CI/CD pipelines — automated agents triggered by events

Your App → Corail Agent (port 8000) → LLM → Response

Each agent exposes its own REST API, SSE streaming, and channel connectors. It handles its own tools, memory, and knowledge bases. Zero dependency on the platform.
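The direct path can be sketched in a few lines of Python. The `/chat` endpoint path and the payload field names below are illustrative assumptions, not the documented Corail API:

```python
import json
from urllib import request

AGENT_URL = "http://localhost:8000"  # each Corail agent serves its own REST API


def build_chat_request(message: str, session_id: str) -> dict:
    """Build a chat payload; field names are assumptions for illustration."""
    return {"message": message, "session_id": session_id}


def ask_agent(message: str, session_id: str = "demo") -> str:
    """POST straight to the agent -- no platform in between."""
    payload = json.dumps(build_chat_request(message, session_id)).encode()
    req = request.Request(
        f"{AGENT_URL}/chat",  # hypothetical endpoint path
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Because the agent handles its own channels and state, the same call works regardless of how the agent was started.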

Platform Access — Enterprise / Mass Adoption

At the same time, users can interact through the Récif dashboard or API. The platform proxies requests to the same agents and adds governance, monitoring, and evaluation. This is ideal for:

  • Enterprise adoption — give every employee access to AI agents through a single portal
  • Multi-team environments — each team has its own namespace, agents, and permissions
  • Production governance — eval-gated releases, canary deployments, quality gates
  • Compliance — audit trails, cost tracking, guardrail policies

User → Récif Dashboard/API (port 8080) → proxy → Corail Agent → LLM → Response

       └── Governance, Eval, Monitoring, Feedback, Releases

The platform adds:

  • Authentication & RBAC — who can access which agents
  • Evaluation pipeline — 14 MLflow scorers gate every release
  • Canary deployments — 10% → 50% → 100% with quality gates
  • Feedback loop — user thumbs up/down feeds back into evaluation datasets
  • AI Radar — fleet-wide health monitoring and alerts
  • GitOps releases — every config change is a versioned, auditable artifact
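The eval-gated canary flow above can be sketched as a small decision function, assuming a single pass/fail threshold per scorer. The threshold value and the shape of the scorer results are illustrative assumptions, not Récif's actual gate logic:

```python
CANARY_STEPS = [10, 50, 100]  # traffic percentages from the release flow above


def next_canary_step(current, scorer_results, threshold=0.8):
    """Promote to the next traffic step only if every scorer clears the gate.

    Returns the next percentage, the same percentage once fully promoted,
    or None to signal rollback. Threshold is an illustrative assumption.
    """
    if any(score < threshold for score in scorer_results.values()):
        return None  # quality gate failed -> auto-rollback
    idx = CANARY_STEPS.index(current)
    return CANARY_STEPS[idx + 1] if idx + 1 < len(CANARY_STEPS) else current
```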

Both Paths, One Agent

This is not a choice — both paths coexist on the same agent, at the same time. A CI/CD pipeline calls the agent directly via REST while an employee chats with it through the dashboard. The agent doesn't know or care which path the request came from. Récif layers governance on top without touching the agent code.
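From the caller's side, the two paths can differ in nothing more than the base URL and an auth header. The host names below and the use of the standard Bearer scheme are illustrative assumptions:

```python
DIRECT_URL = "http://localhost:8000"      # straight to the Corail agent
PLATFORM_URL = "http://recif.local:8080"  # through the Récif proxy (hypothetical host)


def request_headers(jwt=None) -> dict:
    """Direct calls can go unauthenticated; platform calls carry a JWT.

    The Bearer scheme here is an assumption for illustration.
    """
    headers = {"Content-Type": "application/json"}
    if jwt is not None:
        headers["Authorization"] = f"Bearer {jwt}"
    return headers
```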

|              | Direct (Corail only)        | Platform (Récif + Corail)                 |
|--------------|-----------------------------|-------------------------------------------|
| Setup        | `docker run corail`         | `helm install recif`                      |
| Auth         | None or custom              | JWT, RBAC, multi-tenant                   |
| Channels     | REST, Slack, CLI, WebSocket | All channels + Dashboard UI               |
| Evaluation   | Manual                      | 14 automated scorers, eval-gated releases |
| Governance   | None                        | Scorecards, policies, risk profiles       |
| Monitoring   | Logs                        | AI Radar, MLflow traces, cost tracking    |
| Releases     | Manual deploy               | GitOps, canary, auto-rollback             |
| Multi-tenant | Single namespace            | Namespace-per-team, RBAC                  |
| Best for     | Developers, integrations    | Enterprise, compliance, mass adoption     |

Note

You don't have to choose upfront. Start with Corail standalone, add Récif when you need governance. The agent code doesn't change.

Architecture at a Glance

Architecture Overview

Each agent runs as a Kubernetes Pod with its own Corail runtime. Récif manages them through a kubebuilder operator that reconciles Agent CRDs into Deployments, Services, and ConfigMaps.
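As a sketch, an Agent custom resource might look like the following. The `apiVersion`, field names, and values are assumptions for illustration, not the actual Récif schema; the point is that one declarative object per agent is what the operator reconciles into a Deployment, Service, and ConfigMap:

```yaml
# Illustrative Agent CRD sketch -- field names are assumptions
apiVersion: recif.example/v1alpha1
kind: Agent
metadata:
  name: support-bot
  namespace: team-payments   # namespace-per-team isolation
spec:
  llm:
    provider: anthropic      # one of the 7 supported providers
    model: claude-sonnet
  channels: [rest, slack]    # per-agent channel connectors
  memory: postgres
```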

Core Capabilities

| Capability          | Description                                                                               |
|---------------------|-------------------------------------------------------------------------------------------|
| Autonomous Agents   | Each agent is a standalone container — runs independently with its own LLM, tools, memory |
| 7 LLM Providers     | OpenAI, Anthropic, Google AI, Vertex AI, Ollama, Bedrock, custom                          |
| Multi-Channel       | REST API, Slack, Google Chat, CLI, WebSocket — per agent                                  |
| Evaluation Pipeline | 14 MLflow scorers, golden datasets, LLM-as-judge, eval-gated releases                     |
| Canary Deployments  | Champion/challenger with Flagger quality gates, auto-promote/rollback                     |
| Governance          | 4-dimension scorecards, guardrail policies, risk profiles                                 |
| Multi-Tenancy       | Namespace-per-team, RBAC (4 roles), resource isolation                                    |
| GitOps Releases     | Immutable YAML artifacts in Git, full audit trail, ArgoCD-ready                           |
| Knowledge Bases     | Marée ingestion pipeline, pgvector, Docling extraction                                    |
| AI Radar            | Fleet health monitoring, drift detection, cost tracking                                   |
| Framework Agnostic  | ADK, LangChain, CrewAI, or bring your own                                                 |
| MCP Tools           | Native MCP support alongside HTTP, CLI, and builtin tools                                 |

Next Steps