v0.1.0 · Apache 2.0


Helm Values

Complete reference for the Recif Helm chart configuration values.


Overview

The Recif Helm chart packages all platform components for one-command installation. Install with:

```shell
helm install recif deploy/helm/recif -n recif-system --create-namespace
```

Override values with `-f values-override.yaml` or `--set key=value`.
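For example, a minimal override file might pin the image tag and scale the API server (the values shown are illustrative, not recommendations):

```yaml
# values-override.yaml -- pin images and scale the API server
global:
  imageTag: "v0.1.0"
  imagePullPolicy: Always

api:
  replicas: 2
```

Apply it with `helm upgrade --install recif deploy/helm/recif -n recif-system -f values-override.yaml`.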

global

Global settings shared across all subcharts.

| Key | Default | Description |
|---|---|---|
| `global.imageTag` | `latest` | Default image tag for all Recif components. |
| `global.imagePullPolicy` | `IfNotPresent` | Kubernetes image pull policy. Options: `Always`, `IfNotPresent`, `Never`. |
| `global.teamNamespace` | `team-default` | Default namespace for agent deployments. |

api

Recif API server configuration.

| Key | Default | Description |
|---|---|---|
| `api.replicas` | `1` | Number of API server replicas. |
| `api.image` | `ghcr.io/sciences44/recif-api` | API server container image. |
| `api.port` | `8080` | HTTP port for the API server. |
| `api.resources.requests.cpu` | `100m` | CPU request. |
| `api.resources.requests.memory` | `128Mi` | Memory request. |
| `api.resources.limits.cpu` | `500m` | CPU limit. |
| `api.resources.limits.memory` | `512Mi` | Memory limit. |
| `api.env.AUTH_ENABLED` | `"false"` | Enable JWT authentication. Set to `"true"` for production. |
| `api.env.LOG_LEVEL` | `"info"` | Log level: `debug`, `info`, `warn`, `error`. |
| `api.env.LOG_FORMAT` | `"json"` | Log format: `json` or `text`. |
| `api.env.ENV_PROFILE` | `"dev"` | Environment profile: `dev`, `staging`, `production`. |

operator

Recif Kubernetes operator configuration.

| Key | Default | Description |
|---|---|---|
| `operator.replicas` | `1` | Number of operator replicas. Should be 1 (leader election). |
| `operator.image` | `ghcr.io/sciences44/recif-operator` | Operator container image. |
| `operator.resources.requests.cpu` | `100m` | CPU request. |
| `operator.resources.requests.memory` | `128Mi` | Memory request. |
| `operator.resources.limits.cpu` | `500m` | CPU limit. |
| `operator.resources.limits.memory` | `256Mi` | Memory limit. |

dashboard

Recif web dashboard configuration.

| Key | Default | Description |
|---|---|---|
| `dashboard.enabled` | `true` | Deploy the dashboard. Set `false` for headless/API-only mode. |
| `dashboard.replicas` | `1` | Number of dashboard replicas. |
| `dashboard.image` | `ghcr.io/sciences44/recif-dashboard` | Dashboard container image. |
| `dashboard.port` | `3000` | HTTP port for the dashboard. |
| `dashboard.apiUrl` | `""` | Override `NEXT_PUBLIC_API_URL`. Example: `http://recif-api:8080`. Auto-detected when empty. |
| `dashboard.resources.requests.cpu` | `50m` | CPU request. |
| `dashboard.resources.requests.memory` | `64Mi` | Memory request. |
| `dashboard.resources.limits.cpu` | `200m` | CPU limit. |
| `dashboard.resources.limits.memory` | `256Mi` | Memory limit. |
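If you only need the API (for example, when driving Recif from CI or a custom frontend), the dashboard can be skipped entirely. A minimal override for headless mode:

```yaml
# API-only mode: no dashboard Deployment or Service
dashboard:
  enabled: false
```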

postgresql

PostgreSQL database with pgvector extension.

| Key | Default | Description |
|---|---|---|
| `postgresql.enabled` | `true` | Deploy PostgreSQL. Set `false` to use an external database. |
| `postgresql.image` | `pgvector/pgvector:pg16` | PostgreSQL image with pgvector support. |
| `postgresql.storage` | `10Gi` | Persistent volume size. |
| `postgresql.port` | `5432` | PostgreSQL port. |
| `postgresql.credentials.database` | `recif` | Database name. |
| `postgresql.credentials.username` | `recif` | Database username. |
| `postgresql.credentials.password` | `recif_dev` | Database password. Override in production. |

Warning

The default PostgreSQL password (recif_dev) is for development only. Always set a strong password for staging and production deployments.
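One way to handle this is to keep the password in an uncommitted values file; a sketch (the placeholder value is illustrative):

```yaml
# values-secrets.yaml -- keep out of version control
postgresql:
  credentials:
    password: "replace-with-a-generated-secret"
```

Alternatively, pass it at install time with `--set postgresql.credentials.password=...` so it never touches disk.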

corail

Default agent runtime configuration.

| Key | Default | Description |
|---|---|---|
| `corail.image` | `ghcr.io/sciences44/corail` | Default Corail agent container image. |
| `corail.defaultModel` | `ollama/qwen3.5:4b` | Default model for new agents (`provider/model` format). |

ollama

Ollama local LLM server (optional).

| Key | Default | Description |
|---|---|---|
| `ollama.enabled` | `true` | Deploy Ollama for local model inference. |
| `ollama.image` | `ollama/ollama:latest` | Ollama container image. |
| `ollama.storage` | `20Gi` | Persistent volume size for model storage. |
| `ollama.port` | `11434` | Ollama API port. |
| `ollama.gpu` | `false` | Enable GPU support. Set `true` for GPU nodes. |
| `ollama.models` | `["qwen3.5:4b", "nomic-embed-text"]` | Models to pull on startup. |

Tip

Set `ollama.gpu: true` when running on nodes with NVIDIA GPUs for significantly faster inference. The Helm chart configures the appropriate resource limits and tolerations.
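On a GPU node pool, the relevant overrides look like this (the model list repeats the chart defaults; the larger storage size is illustrative):

```yaml
ollama:
  gpu: true        # chart adds NVIDIA resource limits and tolerations
  storage: 40Gi    # larger models need more room than the 20Gi default
  models:
    - "qwen3.5:4b"
    - "nomic-embed-text"
```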

llm

LLM provider API keys. These are injected into the agent-env Secret in agent namespaces.

| Key | Default | Description |
|---|---|---|
| `llm.googleApiKey` | `""` | Google AI API key (simplest setup; get one from aistudio.google.com). |
| `llm.openaiApiKey` | `""` | OpenAI API key. |
| `llm.anthropicApiKey` | `""` | Anthropic API key. |
| `llm.awsRegion` | `""` | AWS region for Bedrock (e.g. `us-east-1`). |
| `llm.awsAccessKeyId` | `""` | AWS access key ID for Bedrock. |
| `llm.awsSecretAccessKey` | `""` | AWS secret access key for Bedrock. |
| `llm.gcp.project` | `""` | GCP project ID for Vertex AI. |
| `llm.gcp.location` | `us-central1` | GCP region for Vertex AI. |

Note

You need at least one provider configured to run agents with real LLMs. For local development, Ollama works with no API key. For the fastest cloud setup, use `llm.googleApiKey` from Google AI Studio.

Setting LLM keys securely

For production, pass keys via `--set` or from a separate values file that is not committed to version control:

```shell
helm install recif deploy/helm/recif \
  -n recif-system --create-namespace \
  --set llm.openaiApiKey=$OPENAI_API_KEY \
  --set llm.anthropicApiKey=$ANTHROPIC_API_KEY
```

Or use a secrets file:

```shell
helm install recif deploy/helm/recif \
  -n recif-system --create-namespace \
  -f values-secrets.yaml
```
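Such a `values-secrets.yaml` only needs the `llm` block; the key values below are placeholders, not real credentials:

```yaml
# values-secrets.yaml -- never commit real keys
llm:
  openaiApiKey: "YOUR_OPENAI_KEY"
  anthropicApiKey: "YOUR_ANTHROPIC_KEY"
```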

istio

Istio service mesh configuration.

| Key | Default | Description |
|---|---|---|
| `istio.enabled` | `false` | Enable Istio integration for mTLS, traffic splitting, and observability. |

When enabled, the Helm chart creates VirtualService and DestinationRule resources for canary traffic splitting.

ingress

Ingress configuration for external access.

| Key | Default | Description |
|---|---|---|
| `ingress.enabled` | `false` | Enable Kubernetes Ingress. |
| `ingress.className` | `nginx` | Ingress controller class. |
| `ingress.host` | `recif.local` | Hostname for the ingress rule. |
| `ingress.tls` | `false` | Enable TLS termination. |

Example: Expose with Ingress

```yaml
ingress:
  enabled: true
  className: nginx
  host: recif.example.com
  tls: true
```

Full Override Example

A production-ready values override:

```yaml
global:
  imageTag: "v0.1.0"
  imagePullPolicy: Always

api:
  replicas: 2
  env:
    AUTH_ENABLED: "true"
    LOG_LEVEL: "warn"
    ENV_PROFILE: "production"

operator:
  replicas: 1

dashboard:
  enabled: true
  replicas: 2
  apiUrl: "https://api.recif.example.com"

postgresql:
  enabled: true
  storage: 50Gi
  credentials:
    password: "strong-production-password"

corail:
  defaultModel: "openai/gpt-4"

ollama:
  enabled: false  # Use cloud providers in production

llm:
  openaiApiKey: "${OPENAI_API_KEY}"
  anthropicApiKey: "${ANTHROPIC_API_KEY}"

istio:
  enabled: true

ingress:
  enabled: true
  className: nginx
  host: recif.example.com
  tls: true
```