Model Configuration - Multi-Provider & Failover

What You'll Learn

After completing this lesson, you will be able to:

  • Configure multiple AI model providers
  • Set up model failover strategies
  • Customize model parameters and compatibility settings
  • Optimize the cost-performance balance

Core Concepts

OpenClaw supports multiple model providers and automatically switches to backup models when the primary model becomes unavailable, ensuring high service availability.

┌─────────────────────────────────────────────────────────────┐
│              Multi-Model Provider Architecture              │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│    User Request                                             │
│       │                                                     │
│       ▼                                                     │
│   ┌─────────────────────────────────────────────────┐      │
│   │              Model Selector                      │      │
│   │   ┌──────────────┐  ┌──────────────┐           │      │
│   │   │ Primary      │  │ Backup       │           │      │
│   │   │ Model        │  │ Model        │           │      │
│   │   │ Claude       │  │ GPT-4        │           │      │
│   │   └──────┬───────┘  └──────┬───────┘           │      │
│   │          │                 │                   │      │
│   │          ▼                 ▼                   │      │
│   │   ┌─────────────────────────────────────────┐  │      │
│   │   │ Failure Detection → Auto Switch →       │  │      │
│   │   │ Degradation Handling                    │  │      │
│   │   └─────────────────────────────────────────┘  │      │
│   └─────────────────────────────────────────────────┘      │
│                          │                                  │
│                          ▼                                  │
│   ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│   │Anthropic │  │ OpenAI   │  │  vLLM    │  │ Bedrock  │   │
│   │  API     │  │  API     │  │  Local   │  │  AWS     │   │
│   └──────────┘  └──────────┘  └──────────┘  └──────────┘   │
│                                                             │
└─────────────────────────────────────────────────────────────┘
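
The selection flow in the diagram can be sketched as a simple loop over the primary model and its fallbacks. This is an illustrative sketch, not OpenClaw's actual internals: `ModelRef` and the `isHealthy` callback are hypothetical names.

```typescript
// Hypothetical sketch of primary → fallback model selection.
type ModelRef = { provider: string; model: string };

// Try the primary first, then each fallback in order; return the
// first candidate the health check accepts, or null if all fail.
function pickModel(
  primary: ModelRef,
  fallbacks: ModelRef[],
  isHealthy: (m: ModelRef) => boolean,
): ModelRef | null {
  for (const candidate of [primary, ...fallbacks]) {
    if (isHealthy(candidate)) return candidate;
  }
  return null; // every provider is down
}
```

In a real deployment, the health check would be driven by failure detection (timeouts, 5xx responses, rate-limit errors) rather than a static predicate.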

Supported Model Providers

| Provider | API Type | Features | Best For |
| --- | --- | --- | --- |
| Anthropic | anthropic-messages | Claude series, long context | Complex reasoning, code generation |
| OpenAI | openai-responses | GPT-4/3.5, feature-rich | General tasks, quick responses |
| vLLM | openai-completions | Local deployment, privacy-first | Data-sensitive, offline environments |
| AWS Bedrock | bedrock-converse-stream | Enterprise-grade, multi-model | Enterprise compliance, multi-model |
| GitHub Copilot | github-copilot | IDE integration | Code completion |
| Google Gemini | google-generative-ai | Multimodal | Image understanding |

Let's Do This

Step 1: Configure Primary Model Provider

Why
Specify the default AI model to use.

bash
# Configure Anthropic Claude
openclaw config set agents.defaults.model.provider anthropic
openclaw config set agents.defaults.model.id claude-3-5-sonnet-20241022
openclaw config set models.providers.anthropic.apiKey "sk-ant-..."

# Or configure OpenAI
openclaw config set agents.defaults.model.provider openai
openclaw config set agents.defaults.model.id gpt-4o
openclaw config set models.providers.openai.apiKey "sk-..."

Model Configuration Type Definition (src/config/types.models.ts)

typescript
type ModelDefinitionConfig = {
  id: string;                    // Model ID
  name: string;                  // Display name
  api?: ModelApi;                // API type
  reasoning: boolean;            // Supports reasoning
  input: Array<"text" | "image">; // Supported input types
  cost: {
    input: number;               // Input token cost ($/1M)
    output: number;              // Output token cost ($/1M)
    cacheRead: number;           // Cache read cost
    cacheWrite: number;          // Cache write cost
  };
  contextWindow: number;         // Context window size
  maxTokens: number;             // Maximum output tokens
};
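
Since the `cost` fields are expressed in dollars per million tokens, a request's cost can be estimated directly from token usage. A minimal sketch (the `estimateCostUsd` helper is illustrative, not part of OpenClaw):

```typescript
// Estimate a single request's cost from the per-million-token rates
// declared in a ModelDefinitionConfig-style cost block.
type Cost = { input: number; output: number; cacheRead: number; cacheWrite: number };

function estimateCostUsd(
  cost: Cost,
  usage: { input: number; output: number; cacheRead?: number; cacheWrite?: number },
): number {
  // rate is $/1M tokens, so divide by 1,000,000 per token
  const perToken = (rate: number, tokens = 0) => (rate * tokens) / 1_000_000;
  return (
    perToken(cost.input, usage.input) +
    perToken(cost.output, usage.output) +
    perToken(cost.cacheRead, usage.cacheRead) +
    perToken(cost.cacheWrite, usage.cacheWrite)
  );
}
```

For example, at $3/1M input and $15/1M output, a request with 1,000 input and 500 output tokens costs 0.003 + 0.0075 = $0.0105.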

Step 2: Add Backup Models

Why
Automatically switch to backup models when the primary model is unavailable.

bash
# Configure backup model list
openclaw config set agents.defaults.fallbacks.0.provider openai
openclaw config set agents.defaults.fallbacks.0.model gpt-4o

openclaw config set agents.defaults.fallbacks.1.provider anthropic
openclaw config set agents.defaults.fallbacks.1.model claude-3-haiku-20240307

Failover Configuration Example

json
{
  "agents": {
    "defaults": {
      "model": {
        "provider": "anthropic",
        "id": "claude-3-5-sonnet-20241022"
      },
      "fallbacks": [
        {
          "provider": "openai",
          "model": "gpt-4o"
        },
        {
          "provider": "anthropic",
          "model": "claude-3-haiku-20240307"
        }
      ]
    }
  }
}
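
The failover behavior this configuration describes amounts to: call the primary, and on error fall through to each fallback in order. A minimal sketch, with `call` kept synchronous for illustration (a real implementation would be async):

```typescript
// Hypothetical failover loop over a model chain: primary first,
// then each configured fallback, rethrowing only if all fail.
type Attempt = { provider: string; model: string };

function completeWithFallback(
  chain: Attempt[],
  call: (m: Attempt) => string, // in practice, an async API call
): string {
  let lastError: unknown;
  for (const m of chain) {
    try {
      return call(m);
    } catch (err) {
      lastError = err; // record the failure and try the next model
    }
  }
  throw lastError ?? new Error("no models configured");
}
```

The chain here corresponds to `model` followed by the `fallbacks` array from the JSON above.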

Step 3: Configure vLLM Local Models

Why
Local deployment protects data privacy and works without internet connectivity.

bash
# Configure vLLM provider
openclaw config set models.providers.vllm.baseUrl "http://localhost:8000/v1"
openclaw config set models.providers.vllm.api "openai-completions"

# Add local model definition
openclaw config set models.providers.vllm.models.0.id "llama-3-8b"
openclaw config set models.providers.vllm.models.0.name "Llama 3 8B"
openclaw config set models.providers.vllm.models.0.reasoning true
openclaw config set models.providers.vllm.models.0.contextWindow 8192
openclaw config set models.providers.vllm.models.0.maxTokens 4096

Complete vLLM Configuration

json
{
  "models": {
    "providers": {
      "vllm": {
        "baseUrl": "http://localhost:8000/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "llama-3-8b",
            "name": "Llama 3 8B (Local)",
            "api": "openai-completions",
            "reasoning": true,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 8192,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
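
Because the vLLM provider uses an OpenAI-compatible API, requests against the `baseUrl` above follow the standard chat-completions shape. A sketch of how such a request could be assembled (the `buildVllmRequest` helper is hypothetical; only the endpoint path and payload shape follow the OpenAI-compatible convention):

```typescript
// Build an OpenAI-compatible chat request for a local vLLM server.
function buildVllmRequest(baseUrl: string, modelId: string, prompt: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    body: {
      model: modelId,
      messages: [{ role: "user", content: prompt }],
      max_tokens: 4096, // matches the maxTokens in the config above
    },
  };
}
```

The resulting object could be sent with any HTTP client (e.g. `fetch` with `method: "POST"` and a JSON body); no API key is needed for a default local vLLM deployment.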

Step 4: Configure AWS Bedrock

Why
AWS Bedrock provides enterprise-grade multi-model access.

bash
# Configure Bedrock discovery
openclaw config set models.bedrockDiscovery.enabled true
openclaw config set models.bedrockDiscovery.region us-east-1
openclaw config set models.bedrockDiscovery.providerFilter '["anthropic","meta"]'

# Or manually configure Bedrock
openclaw config set models.providers.bedrock.auth aws-sdk
openclaw config set models.providers.bedrock.baseUrl "https://bedrock-runtime.us-east-1.amazonaws.com"

Bedrock Auto-Discovery Configuration

json
{
  "models": {
    "bedrockDiscovery": {
      "enabled": true,
      "region": "us-east-1",
      "providerFilter": ["anthropic", "meta", "amazon"],
      "refreshInterval": 86400,
      "defaultContextWindow": 200000,
      "defaultMaxTokens": 4096
    }
  }
}

Step 5: Configure Model Allowlist

Why
Limit the models users can access to control costs.

bash
# Configure model allowlist
openclaw config set agents.defaults.models.claude-3-5-sonnet-20241022.enabled true
openclaw config set agents.defaults.models.gpt-4o.enabled true
openclaw config set agents.defaults.models.claude-3-haiku-20240307.enabled true

Allowlist Configuration Example

json
{
  "agents": {
    "defaults": {
      "models": {
        "claude-3-5-sonnet-20241022": {
          "enabled": true,
          "maxCostPerDay": 10.0
        },
        "gpt-4o": {
          "enabled": true,
          "maxCostPerDay": 5.0
        },
        "claude-3-haiku-20240307": {
          "enabled": true,
          "maxCostPerDay": 2.0
        }
      }
    }
  }
}
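
The check implied by this configuration is: a model is usable only if its entry exists, is enabled, and today's spend is still under its cap. A minimal sketch using the `enabled` and `maxCostPerDay` fields from the example above (`isModelAllowed` itself is an illustrative helper):

```typescript
// Hypothetical allowlist check combining the enabled flag with a
// per-day spending cap.
type ModelPolicy = { enabled: boolean; maxCostPerDay?: number };

function isModelAllowed(
  policy: ModelPolicy | undefined,
  spentTodayUsd: number,
): boolean {
  if (!policy?.enabled) return false; // missing or disabled → denied
  if (policy.maxCostPerDay !== undefined && spentTodayUsd >= policy.maxCostPerDay) {
    return false; // daily budget exhausted
  }
  return true;
}
```

Treating an absent entry as denied makes the allowlist fail closed: only explicitly listed models are reachable.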

Step 6: Configure Model Compatibility

Why
Models differ in which features they support, so some providers need compatibility settings to adjust request formatting.

json
{
  "models": {
    "providers": {
      "custom-provider": {
        "baseUrl": "https://api.custom.com",
        "headers": {
          "X-Custom-Header": "value"
        },
        "models": [
          {
            "id": "custom-model",
            "name": "Custom Model",
            "api": "openai-completions",
            "reasoning": true,
            "input": ["text", "image"],
            "compat": {
              "supportsStore": false,
              "supportsDeveloperRole": true,
              "supportsReasoningEffort": true,
              "maxTokensField": "max_completion_tokens"
            },
            "cost": {
              "input": 1.0,
              "output": 3.0,
              "cacheRead": 0.1,
              "cacheWrite": 1.25
            },
            "contextWindow": 128000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
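
To see what a compat flag does in practice, consider `maxTokensField`: some OpenAI-compatible providers expect `max_completion_tokens` instead of `max_tokens` in the request body. A sketch of applying that flag when building a payload (the `withMaxTokens` helper is illustrative):

```typescript
// Apply the compat maxTokensField flag from the example above when
// setting the output-token limit on a request body.
type Compat = { maxTokensField?: "max_tokens" | "max_completion_tokens" };

function withMaxTokens(
  body: Record<string, unknown>,
  maxTokens: number,
  compat?: Compat,
): Record<string, unknown> {
  const field = compat?.maxTokensField ?? "max_tokens"; // default field name
  return { ...body, [field]: maxTokens };
}
```

The other flags (`supportsStore`, `supportsDeveloperRole`, `supportsReasoningEffort`) would be applied the same way: each one gates whether a request field is emitted at all.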

Checkpoint ✅

Verify model configuration:

bash
# List available models
openclaw models list

# Expected output
┌────────────────────────────────────────────────────────────┐
│  Available Models                                          │
├────────────────────────────────────────────────────────────┤
│  anthropic/claude-3-5-sonnet-20241022    Active            │
│  anthropic/claude-3-opus-20240229        Active            │
│  anthropic/claude-3-haiku-20240307       Active            │
│  openai/gpt-4o                           Active            │
│  openai/gpt-4o-mini                      Active            │
│  vllm/llama-3-8b                         Active            │
└────────────────────────────────────────────────────────────┘

# Test model connection
openclaw models test anthropic/claude-3-5-sonnet-20241022

Common Pitfalls

Common Configuration Issues

  1. Incorrect Model ID
    Symptom: Model not found
    Solution: Check the model ID spelling, use openclaw models list to view available models

  2. Invalid API Key
    Symptom: Authentication failed
    Solution: Verify the API key is correct, check for extra whitespace

  3. Context Limit Exceeded
    Symptom: Context window exceeded
    Solution: Reduce contextTokens configuration, or enable context compression

  4. Failover Not Working
    Symptom: No response when primary model fails
    Solution: Check fallbacks configuration format, ensure backup models are available

  5. Insufficient Bedrock Permissions
    Symptom: Access denied
    Solution: Check AWS IAM permissions, ensure you have bedrock:InvokeModel permission

Cost Optimization Recommendations

Model Selection Strategy

| Task Type | Recommended Model | Cost Index | Quality |
| --- | --- | --- | --- |
| Simple Q&A | Claude Haiku / GPT-4o-mini | ⭐ | ⭐ |
| General Tasks | Claude Sonnet / GPT-4o | ⭐⭐⭐ | ⭐⭐⭐⭐ |
| Complex Reasoning | Claude Opus / GPT-4o | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Code Generation | Claude Sonnet | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |

Caching Strategy

json
{
  "agents": {
    "defaults": {
      "cache": {
        "enabled": true,
        "ttl": 3600,
        "maxSize": 1000
      }
    }
  }
}
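
The `ttl` and `maxSize` settings above imply a cache where entries expire after a fixed number of seconds and the total entry count is bounded. A minimal sketch of that behavior (this `TtlCache` class is illustrative, not OpenClaw's cache implementation):

```typescript
// Minimal TTL cache matching the config above: entries expire after
// ttlSeconds, and the map is capped at maxSize entries.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlSeconds: number, private maxSize: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= now) {
      this.store.delete(key); // expired: drop lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    if (this.store.size >= this.maxSize && !this.store.has(key)) {
      // Evict the oldest insertion to respect maxSize.
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: now + this.ttlSeconds * 1000 });
  }
}
```

With `ttl: 3600` and `maxSize: 1000` as configured above, a cached response is reused for up to an hour, and the 1,001st distinct entry evicts the oldest one.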

Lesson Summary

In this lesson, you learned:

  • ✅ How to configure multiple model providers
  • ✅ How to set up model failover strategies
  • ✅ How to configure local vLLM models
  • ✅ AWS Bedrock auto-discovery
  • ✅ Model allowlist and cost control
  • ✅ Model compatibility configuration
  • ✅ Cost optimization strategies

Next Lesson Preview

In the next lesson, we'll cover Browser Control.

You'll learn:

  • How to automate Chrome/Chromium
  • CDP protocol basics
  • Browser configuration and security settings

Appendix: Source Code Reference


Last updated: 2026-02-14

| Feature | File Path | Line Numbers |
| --- | --- | --- |
| Model Configuration Types | src/config/types.models.ts | 1-60 |
| Model Selection Logic | src/agents/model-selection.ts | - |
| Model Failover | src/agents/model-fallback.ts | - |
| Model Catalog Loading | src/agents/model-catalog.ts | - |
| Model Commands | src/commands/models.ts | - |

Supported API Types:

  • openai-completions: OpenAI-compatible API
  • openai-responses: OpenAI Responses API
  • anthropic-messages: Anthropic Messages API
  • google-generative-ai: Google Gemini API
  • github-copilot: GitHub Copilot API
  • bedrock-converse-stream: AWS Bedrock API