# CLI Adapters
OpenClaw routes tasks across CC CLI, Gemini, Qwen, and any OpenAI-compatible provider. One config. Automatic failover. Zero lock-in.
## Provider Discovery
OpenClaw auto-detects available providers from environment variables:
| Environment variable | Purpose |
| --- | --- |
| `LLM_BASE_URL` + `LLM_API_KEY` | Recommended: access 200+ models |
| `ANTHROPIC_API_KEY` | Direct Claude access |
| `OPENAI_API_KEY` | GPT-4o, o1, o3 |
| `GOOGLE_API_KEY` | Gemini 2.0 Flash/Pro |
| `DEEPSEEK_API_KEY` | Cost-efficient reasoning |
| `DASHSCOPE_API_KEY` | Alibaba models |
| `OLLAMA_BASE_URL` | Local models, offline |

Check which providers are active:

```bash
mekong adapters/list
# ✓ OpenRouter (primary)
# ✓ Anthropic (fallback 1)
# ✓ Ollama (offline fallback)
# ✗ DeepSeek (no API key)
```

## Routing Logic
Tasks are routed based on three variables:
```bash
export LLM_BASE_URL=https://openrouter.ai/api/v1
export LLM_API_KEY=sk-or-v1-yourkey
export LLM_MODEL=anthropic/claude-sonnet-4
```

Override per-command with `--model`:
```bash
mekong cook "Write pitch deck" --model google/gemini-2.0-flash
mekong code "Refactor auth module" --model deepseek/deepseek-r1
```

Configure routing rules in `mekong/adapters/llm-providers.yaml`:
```yaml
routing:
  coding: anthropic/claude-sonnet-4
  writing: google/gemini-2.0-flash
  analysis: deepseek/deepseek-r1
  default: openai/gpt-4o
```

## Automatic Failover
If the primary provider fails (rate limit, outage, timeout), OpenClaw automatically falls back:
Fallback chain:

```text
OPENROUTER_API_KEY → DASHSCOPE_API_KEY → DEEPSEEK_API_KEY
  → ANTHROPIC_API_KEY → OPENAI_API_KEY → GOOGLE_API_KEY
  → OLLAMA_BASE_URL → OfflineProvider
```

Configure failover behavior:
```yaml
# mekong/adapters/failover.yaml
strategy: sequential    # sequential | random | latency-based
timeout_ms: 10000
max_retries: 2
notify_on_failover: true
```

The OfflineProvider runs a minimal Ollama model locally, ensuring tasks never fully fail even without internet.
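Conceptually, the `sequential` strategy walks the fallback chain, retrying transient failures before moving to the next provider. The following is a minimal sketch of that idea, not OpenClaw's actual implementation; `ProviderError` and the provider callables are hypothetical stand-ins for real adapter clients, and per-call timeouts are omitted for brevity:

```python
class ProviderError(Exception):
    """Raised by a provider on rate limit, outage, or timeout (hypothetical)."""


def call_with_failover(providers, task, max_retries=2):
    """Try each provider in order; retry each up to max_retries times
    before falling through to the next one in the chain."""
    errors = []
    for provider in providers:
        for _attempt in range(max_retries + 1):
            try:
                return provider(task)
            except ProviderError as exc:
                errors.append(exc)
    # Mirrors the OfflineProvider backstop failing too: nothing succeeded.
    raise RuntimeError(f"all providers failed: {errors}")
```

With `max_retries: 2` from the config above, each provider gets three attempts before the chain advances.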
## Swarm Mode
Run multiple providers in parallel and pick the best result:
```bash
mekong cook "Generate 5 headline variants" --swarm
```

Swarm mode spawns the task across all available providers simultaneously, then uses a judge model to select the best output. It consumes MCU for each provider used.
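The `best-of-n` strategy described above can be sketched as a parallel fan-out followed by a judged selection. This is an illustrative sketch only; the `providers` and `judge` callables are hypothetical placeholders for real model clients:

```python
from concurrent.futures import ThreadPoolExecutor


def swarm_best_of_n(providers, task, judge):
    """Fan the task out to every provider in parallel, then let the
    judge score each candidate and keep the highest-scoring output."""
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        candidates = list(pool.map(lambda p: p(task), providers))
    return max(candidates, key=judge)
```

A `consensus` mode would instead compare candidates against each other, and `fastest` would return the first completed result.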
Configure swarm strategy:
```yaml
# mekong/adapters/swarm.yaml
mode: best-of-n    # best-of-n | consensus | fastest
n: 3               # number of providers
judge_model: anthropic/claude-sonnet-4
```

## Custom Adapters
Add any OpenAI-compatible API as a custom adapter:
```bash
mekong adapters/add \
  --name "my-llm" \
  --base-url https://api.my-llm.com/v1 \
  --api-key $MY_LLM_KEY \
  --model my-model-v1
```

Or create a config file at `mekong/adapters/custom/my-llm.yaml`:
```yaml
name: my-llm
base_url: https://api.my-llm.com/v1
api_key_env: MY_LLM_KEY
model: my-model-v1
context_window: 128000
supports_tools: true
```

Custom adapters participate in failover and swarm routing automatically.
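An "OpenAI-compatible API" means the adapter speaks the standard `/chat/completions` request shape. The sketch below assembles such a request from a config dict whose keys mirror the YAML fields above; it is illustrative only, since OpenClaw's internal adapter interface is not documented here:

```python
import json
import os


def build_chat_request(adapter, prompt):
    """Build URL, headers, and JSON body for an OpenAI-compatible
    /chat/completions call from an adapter config dict."""
    url = adapter["base_url"].rstrip("/") + "/chat/completions"
    headers = {
        # The key is read from the environment variable named in the config,
        # matching the api_key_env field above.
        "Authorization": "Bearer " + os.environ[adapter["api_key_env"]],
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": adapter["model"],
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```

Keeping the secret in `api_key_env` rather than the YAML file itself means the config can be committed without leaking credentials.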