Open Source · Apache 2.0

Govern Your AI Agents at Scale with Récif

Deploy autonomous agents on Kubernetes with evaluation-driven lifecycle, enterprise governance, and real-time monitoring. No vendor lock-in.

Demo agents are easy.
Production agents are a different story.

We've all seen the demo. An AI agent answers questions, calls tools, chains reasoning steps. The room is impressed. Then it reaches production.

No Quality Gates

Most tools let you deploy without measuring. An agent inventing policies, deleting databases, hallucinating answers — these aren't edge cases. They're what happens without operational discipline.

Vendor Lock-in

Proprietary platforms tie you to one model provider and one deployment model. Your agents become hostages to someone else's roadmap and pricing.

Too Light for Enterprise

Open-source chat UIs lack multi-tenancy, audit trails, guardrails, and the operational discipline required for regulated environments.

Not another chat wrapper.

Récif is infrastructure for autonomous agents. A control tower, not a conversation UI.

Not a Chat UI

Récif isn't another ChatGPT wrapper. It's the control tower for autonomous agents that run in production. Deploy once, govern forever.

Agents Are First-Class

Agents don't live inside Récif — they live independently in their own containers. Corail agents are autonomous: they run their own runtime, their own model, their own tools. Récif is the ecosystem that connects, governs, and observes them — like a reef nurturing its corals.

Enterprise-Grade Governance

Scorecards, guardrails, versioned releases, audit trails. Know what your agents do, why, and how well.

[Platform diagram: the Récif control tower governing a fleet of agents. Per-agent lifecycle cards show versions with Governance, Eval, and Release scores: e.g. HR Assistant v1 (champion, 90% traffic), HR Assistant v3 (challenger, 10% canary), HR Assistant v2 (rejected, stopped). Shared services: Governance (policies, scorecards, risk profiles), Evaluation (MLflow server, 14 scorers, LLM judges), Security (RBAC, multi-tenant namespace isolation, JWT/OAuth/SAML identity), Ops (operator with 5 CRDs, Helm + GitOps, recif-state, Istio mesh), Dashboard (chat, marketplace, settings), Monitoring (AI Radar, traces), Integrations (MCP, HTTP, Terraform). Agents built on different frameworks (ADK, Corail, CrewAI, OpenClaw) connect to channels such as GitHub, Slack, CI/CD pipelines, and the chat UI, with canary traffic splits between champion and challenger versions.]
Architecture

How it all connects.

Agents are autonomous containers. Récif is the control tower that governs them. They communicate via gRPC, deploy via GitOps, and run on Kubernetes.

End Users & Channels
  Slack · REST API · Google Chat · WebSocket · Dashboard
      ↓ HTTP / WebSocket / Events
Corail Agents (autonomous containers)
  Code Reviewer (claude-sonnet-4; Tools, Skills, Guards)
  Ticket Triage (gpt-4o; RAG, Memory)
  Data Analyst (llama-3.3; Tools, AG-UI)
  Security Scanner (claude-sonnet-4; Skills, Guards)
      ↓ gRPC Control Bridge
Récif Platform (control tower)
  Dashboard (Next.js) · REST API (Go / Chi) · Operator (kubebuilder)
  Governance (Scorecards) · AI Radar (Monitoring) · Releases (Git-backed)
      ↓ SQL / K8s API / GitHub API
Infrastructure
  PostgreSQL (pgvector) · Kubernetes (CRDs + Istio) · Ollama (local models)
  recif-state (Git repo) · OCI Registry (images) · Istio Service Mesh

Built-in service mesh. Zero config.

Every agent runs with an Istio sidecar. mTLS encryption, traffic management, canary deployments, and full observability — out of the box. No competitor offers this.

Kiali — Service Graph (Canary Deployment)
[Service graph: Users (Slack, REST, WS) → recif-api (control tower) → Istio VirtualService traffic split → agent-v1 (90% traffic, stable, claude-sonnet-4) and agent-v2 (10% canary, claude-sonnet-4 with new prompt) → PostgreSQL (pgvector) and Ollama (LLM provider). Live metrics: v1 score 97.1, 12ms avg, 0.1% errors; v2 score 94.3, 18ms avg, 0.3% errors.]
Live Kiali graph showing canary deployment — v2 receives 10% of traffic while v1 handles 90%.

Canary Deployments

Deploy v2 on 10% of traffic. Compare scores, latency, error rates. Progressive rollout: 10% → 50% → 100%. Auto-rollback if quality degrades.
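Under Istio, the 90/10 split described above is expressed as weighted routes in a VirtualService. A minimal sketch, with illustrative host and subset names rather than Récif's actual resource layout:

```yaml
# Sketch only: split traffic 90/10 between two agent versions.
# The hr-assistant host and the v1/v2 subsets are illustrative.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: hr-assistant
spec:
  hosts:
    - hr-assistant.agents.svc.cluster.local
  http:
    - route:
        - destination:
            host: hr-assistant.agents.svc.cluster.local
            subset: v1        # stable version
          weight: 90
        - destination:
            host: hr-assistant.agents.svc.cluster.local
            subset: v2        # canary version
          weight: 10
```

The v1/v2 subsets would be declared in a matching DestinationRule; progressive rollout is then just adjusting the weights toward 100.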

Full Observability

Kiali shows the service graph in real-time. See which agent talks to which DB, LLM, or tool. Latency, error rate, throughput — per agent, per version. Distributed tracing included.

mTLS Encryption

Every agent-to-agent and agent-to-service communication is encrypted automatically. Zero-trust networking with zero config. Certificate rotation handled by Istio.
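In Istio terms, "zero config" strict mTLS for a namespace is a single small resource. A sketch, with an illustrative namespace name:

```yaml
# Sketch: enforce mutual TLS for every workload in the agents namespace.
# Sidecars handle certificates and rotation; no application changes needed.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: agents      # illustrative namespace
spec:
  mtls:
    mode: STRICT
```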

Traffic Management

A/B test two models on the same agent. Blue/green deployments. Rate limiting per agent. Circuit breaker — if an agent crashes, traffic is cut instantly.

Everything you need for production agents.

From model selection to governance, Récif handles the full lifecycle.

Multi-Model Runtime

Ollama, Anthropic, AWS Bedrock, Vertex AI, OpenAI. Switch providers without changing code.
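Using the Agent CRD fields shown in the quick-start below, switching providers would plausibly be a two-line spec change; a sketch, since the exact field semantics may differ:

```yaml
# Sketch: the same agent pointed at a local Ollama model instead of Anthropic.
# Field names follow the Agent CRD example on this page; values are illustrative.
apiVersion: recif.io/v1
kind: Agent
metadata:
  name: code-reviewer
spec:
  model: llama-3.3       # was: claude-sonnet-4
  provider: ollama       # was: anthropic
```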

Skills System

Anthropic-compatible skill packages. Import from GitHub, build custom, share across teams.

Knowledge Base & RAG

pgvector-powered retrieval. Connect Drive, Jira, Confluence, Databricks natively.

AI Radar & Monitoring

Real-time health, latency, token consumption, cost tracking, alerts per agent.

Governance & Guardrails

Scorecards, quality gates, guardrail policies, risk profiles. Enterprise compliance built-in.

GitOps Releases

Every config change is a Git commit. Immutable artifacts, diff, rollback, full audit trail.

No-Code + Custom Dev

Product teams create agents in minutes. Engineers scaffold projects with LangChain, CrewAI, AutoGen.

Canary Deployments & Evaluation

Deploy new versions to a subset of traffic. Evaluate with golden datasets before full rollout.

MCP & Integrations

Native MCP tool support. Plus HTTP, CLI, and custom tool types. Connect GitHub, Jira, Slack, AWS, GCP, and more.

Connect everywhere. Integrate everything.

Communication Channels

Agents communicate through any channel. Deploy once, reach everywhere.

REST API Slack Google Chat WebSocket Custom

Platform Integrations

Connect to your existing tools. Agents inherit platform integrations automatically.

GitHub Jira Jenkins Slack AWS GCP Datadog Terraform

Three steps to production agents.

From zero to governed agents in minutes, not months.

1

Deploy

One command. Kind + Helm locally, or Terraform for cloud. The full platform spins up in minutes.

# Local development
cd deploy/kind && bash setup.sh

# Cloud (AWS)
cd deploy/terraform && terraform apply
2

Create

Create agents via the dashboard or define them as Kubernetes CRDs. Infrastructure as code, natively.

apiVersion: recif.io/v1
kind: Agent
metadata:
  name: code-reviewer
spec:
  model: claude-sonnet-4
  provider: anthropic
  skills:
    - github-review
    - code-analysis
3

Govern

Monitor, evaluate, and control your agents at scale. Scorecards grade quality. Guardrails enforce policy.

Agent Fleet Overview
24 Agents · 22 Healthy · 1 Degraded · 1 Down
code-reviewer: score 97.1, 12ms avg
ticket-triage: score 93.4, 45ms avg
data-analyst: score 88.7, 230ms avg

Evaluation is not a step. It's the architecture.

From data ingestion to user feedback, every component feeds the evaluation loop. No agent ships without proof. No regression goes undetected.

End-to-End Lifecycle
1. Ingest: Marée pulls docs from Drive, Jira, Confluence, S3
2. Dataset: Golden datasets with expected outputs + RAG context
3. Evaluate: 14 MLflow scorers + LLM-as-judge per risk profile
4. Quality Gate: Governance scorecards block deploy if below threshold
5. Release: GitOps artifact, pending_eval → approved or rejected
6. Canary: 10% traffic, Flagger webhook checks eval scores
7. Production: Live monitoring, sample-rate eval on real traffic
8. Feedback: User & expert annotations feed back into datasets
Negative feedback auto-appends to golden datasets — the loop never stops
Eval Run — code-reviewer v3 — Risk Profile: HIGH
14 Scorers — MLflow GenAI
Safety 98.2
Relevance 95.7
Correctness 91.4
Groundedness 93.1
Tool Accuracy 87.5
Cost $0.003
APPROVED — avg 94.3 ≥ threshold 90
Release v3 committed to recif-state. Applied to K8s CRD.
Feedback Loop — Live
👎
User rated 2/5 on trace tr_8f2k
"Wrong answer about leave policy"
Auto-appended to dataset
🔍
Expert annotation on trace tr_3m1n
Expected: "Employees get 25 days PTO + 10 sick days"
MLflow assessment
📈
Production sample — 10% eval rate
12 traces scored in last hour. Avg safety: 97.8
AI Radar

14 MLflow Scorers

Safety, Relevance, Correctness, Completeness, Fluency, Equivalence, Summarization, Guidelines, ExpectationsGuidelines, RetrievalRelevance, RetrievalGroundedness, RetrievalSufficiency, ToolCallCorrectness, ToolCallEfficiency.

+ Custom LLM-as-Judge + Register Your Own

Risk Profiles & Governance

LOW, MEDIUM, HIGH risk profiles select which scorers run. Governance scorecards grade 4 dimensions: Quality (35%), Safety (30%), Cost (20%), Compliance (15%). Policies enforce token limits, latency SLAs, blocked topics, daily cost caps.
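The weighted scorecard grade quoted above can be sketched in a few lines. This is a toy illustration of the arithmetic, not Récif's actual scoring code:

```python
# Toy sketch of a governance scorecard grade using the weights quoted above:
# Quality 35%, Safety 30%, Cost 20%, Compliance 15%.
WEIGHTS = {"quality": 0.35, "safety": 0.30, "cost": 0.20, "compliance": 0.15}

def scorecard_grade(scores: dict[str, float]) -> float:
    """Weighted average of the four dimensions, each scored 0-100."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

grade = scorecard_grade({"quality": 94, "safety": 98, "cost": 80, "compliance": 90})
# 0.35*94 + 0.30*98 + 0.20*80 + 0.15*90 = 32.9 + 29.4 + 16.0 + 13.5 = 91.8
```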

Eval-Gated Releases

Every release starts as pending_eval. Corail runs scoring async, POSTs results to a callback. If scores pass governance thresholds → approved and applied to K8s. If not → rejected and auto-rollback.

Canary + Flagger Quality Gate

Deploy v2 on 10% of traffic. Flagger's webhook queries MLflow for live eval scores. If avg ≥ 60% → promote to 100%. If not → auto-rollback. Zero manual intervention.
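Flagger's Canary resource wires this up declaratively; the analysis block drives the progressive steps and the webhook is the quality gate. A sketch, where the eval-gate endpoint and its metadata are hypothetical while the Flagger analysis/webhook structure itself follows the upstream CRD:

```yaml
# Sketch: a Flagger canary gated on eval scores. The recif-api eval-gate
# URL and minScore metadata are illustrative, not a documented Récif endpoint.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: hr-assistant
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hr-assistant
  service:
    port: 8080
  analysis:
    interval: 1m
    threshold: 3          # roll back after 3 failed checks
    stepWeight: 10        # 10% -> 20% -> ... traffic steps
    maxWeight: 50
    webhooks:
      - name: eval-score-gate
        url: http://recif-api.recif/flagger/eval-gate   # hypothetical endpoint
        metadata:
          minScore: "60"
```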

Feedback → Dataset → Re-eval

User thumbs-down (score < 3/5) auto-appends the failing input to the agent's golden dataset. Expert annotations via MLflow assessments add expected outputs. Next eval run includes these cases. The agent gets better with every interaction.

Marée — feed your agents with knowledge.

A pluggable ingestion pipeline that transforms documents into searchable vector embeddings. From raw PDF to agent-ready knowledge in one command.

maree ingest --source drive --kb hr-knowledge
Source
Pull from Drive, Jira, Confluence, S3, Databricks, or local files
PDF DOCX HTML CSV
Processor
Extract text, tables, and images with Docling. OCR included.
Transformer
Chunk, clean, and prepare for embedding. Smart splitting preserves context.
Store
Embed with Ollama and store in PostgreSQL + pgvector. Ready for RAG.

Pluggable Pipeline

Each stage is replaceable. Swap Docling for Tika, switch from pgvector to Pinecone, add custom processors. The pipeline adapts to your stack.
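The "every stage is replaceable" idea reduces to a pipeline of interchangeable functions. A minimal Python sketch whose stage names mirror the Source → Processor → Transformer → Store flow; the implementations are toy stand-ins, not Marée's real interfaces:

```python
# Toy sketch of a pluggable pipeline: each stage maps docs -> docs, so
# swapping Docling for Tika or pgvector for Pinecone is just a different
# entry in the stage list. Not Marée's actual code.
from typing import Callable

Stage = Callable[[list[str]], list[str]]

def run_pipeline(docs: list[str], stages: list[Stage]) -> list[str]:
    """Feed the documents through each stage in order."""
    for stage in stages:
        docs = stage(docs)
    return docs

def extract(docs: list[str]) -> list[str]:      # stand-in for Docling
    return [d.strip() for d in docs]

def chunk(docs: list[str]) -> list[str]:        # stand-in for smart splitting
    return [p for d in docs for p in d.split(". ") if p]

def store(docs: list[str]) -> list[str]:        # stand-in for embed + pgvector
    return [f"embedded:{d}" for d in docs]

out = run_pipeline(["  Leave policy. 25 days PTO  "], [extract, chunk, store])
# out == ["embedded:Leave policy", "embedded:25 days PTO"]
```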

Enterprise Connectors

Connect to where your knowledge already lives.

Google Drive Jira Confluence Databricks S3 Local Files

Docling-Powered Extraction

IBM's Docling handles complex documents: PDFs with tables, scanned images with OCR, DOCX with embedded formatting. Production-grade extraction, not toy parsing.

One Command

maree ingest \
  --source drive \
  --kb hr-knowledge \
  --embedder ollama

How Récif compares.

Honest comparison with real competitors. Récif doesn't compete with chat UIs — it's a different category.

Feature comparison: Récif vs Dify, LibreChat, OpenWebUI, CrewAI, and Gemini Enterprise, across:

  • Autonomous Agents
  • Eval-Gated Releases
  • Canary Deployments
  • Service Mesh (Istio)
  • K8s CRDs + Operator
  • GitOps Releases
  • Governance & Scorecards
  • Multi-Model (8+ providers)
  • MCP Tools
  • RAG / Knowledge Base
  • Agent Memory
  • Visual Workflow Builder
  • Multi-Tenant
  • Open Source
  • Self-Hosted

Why enterprises choose Récif.

Based on real feedback from teams evaluating Dify, LibreChat, OpenWebUI, CrewAI, and Gemini Enterprise.

🏢

One Platform, All Teams

LibreChat forces N separate instances — one per team, no shared governance, no cost visibility. Récif runs as a single platform with namespace-per-team isolation, centralized governance, and per-agent cost tracking.

🔓

No MCP Lock-in

Other platforms force you to repackage your agents as MCP tools. Récif agents live in their own containers — bring your own framework, your own code, your own tools. The control bridge connects everything.

🛡️

Built-in Governance

Not bolted on — built in. Scorecards, guardrails, quality gates, versioned releases with Git audit trail. Know what your agents do, how well, and how much they cost.

🌊

An Open-Source Gemini Enterprise

The enterprise vision — a governed, centralized agent platform — without vendor lock-in. Any model (Ollama, Anthropic, Bedrock, Vertex), any cloud, Apache 2.0 license.

🎯

No-Code + Custom Dev

Product teams create agents in minutes with the no-code wizard. Engineering teams scaffold custom projects with their framework of choice. Both coexist in one platform.

👥

Multi-Tenant RBAC

Platform admins govern everything. Team admins manage their agents. Developers build and deploy. Viewers observe. Namespace isolation ensures no team can affect another.
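The namespace-per-team model maps directly onto standard Kubernetes RBAC. A sketch with illustrative names, not Récif's built-in role definitions:

```yaml
# Sketch: scope a team's developers to their own namespace using the
# built-in "edit" ClusterRole. Group and namespace names are illustrative.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-hr-developers
  namespace: team-hr           # each team gets its own namespace
subjects:
  - kind: Group
    name: team-hr-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                   # granted only within this namespace
  apiGroup: rbac.authorization.k8s.io
```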

Built for everyone. Loved by engineers.

🚀

For Everyone — Mass Adoption

  • Give every team access to AI agents without managing infrastructure
  • No-code agent creation for product managers, analysts, support teams
  • Marketplace of ready-to-use agents and skills
  • Central governance ensures compliance without slowing teams down
  • Cost tracking and budgets per team
  • Chat with any agent directly from the dashboard
⚙️

For Engineers — Platform Teams

  • Deploy custom agents with your own code, any framework
  • GitOps-native: every change is a commit, every deploy is traceable
  • Kubernetes-native: CRDs, operators, Helm charts, namespace isolation
  • gRPC control plane, Istio service mesh, canary deployments
  • Skills as code (Anthropic SKILL.md format)
  • Scaffold projects: LangChain, CrewAI, AutoGen, or pure Corail

Ready to govern your agents?

Deploy Récif in minutes. Join the community building the future of autonomous AI infrastructure.