openclaw/docs/providers/ollama.md

---
summary: "Run Clawdbot with Ollama (local LLM runtime)"
read_when:
  - You want to run Clawdbot with local models via Ollama
  - You need Ollama setup and configuration guidance
---

# Ollama

Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and can auto-discover tool-capable models when the provider is enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` config is set.
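Under the hood this is the standard OpenAI-style chat completions endpoint. As a quick sanity check against a default local install (assuming you've already pulled `llama3.3`; the key can be any non-empty string):

```bash
curl -s http://localhost:11434/v1/chat/completions \
  -H "Authorization: Bearer ollama-local" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.3", "messages": [{"role": "user", "content": "Say hi"}]}'
```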

## Quick start

1. Install Ollama: https://ollama.ai

2. Pull a model:

   ```bash
   ollama pull llama3.3
   # or
   ollama pull qwen2.5-coder:32b
   # or
   ollama pull deepseek-r1:32b
   ```

3. Enable Ollama for Clawdbot (any value works; Ollama doesn't require a real key):

   ```bash
   # Set environment variable
   export OLLAMA_API_KEY="ollama-local"

   # Or configure in your config file
   clawdbot config set models.providers.ollama.apiKey "ollama-local"
   ```

4. Use Ollama models:

   ```json5
   {
     agents: {
       defaults: {
         model: { primary: "ollama/llama3.3" }
       }
     }
   }
   ```

## Model Discovery

When Ollama is enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` entry exists, Clawdbot automatically detects models installed on your Ollama instance by querying `/api/tags` and `/api/show` at `http://localhost:11434`. It only keeps models that report tool support, so you don't need to configure them manually.
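You can reproduce the discovery probes by hand. A minimal sketch, assuming a recent Ollama build that includes a `capabilities` array in the `/api/show` response:

```bash
# List installed models (what discovery enumerates first)
curl -s http://localhost:11434/api/tags

# Inspect one model; on recent builds, tool-capable models
# report "tools" in the capabilities array
curl -s http://localhost:11434/api/show -d '{"model": "llama3.3"}'
```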

To see what models are available:

```bash
ollama list
clawdbot models list
```

To add a new model, simply pull it with Ollama:

```bash
ollama pull mistral
```

The new model will be automatically discovered and available to use.

If you set `models.providers.ollama` explicitly, auto-discovery is skipped and you must define your models manually.
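For reference, a manual definition might look like the sketch below. The `models` list and its fields are illustrative assumptions, not confirmed schema, so check the Clawdbot config reference for the exact shape:

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://localhost:11434/v1",
        // Hypothetical field: with auto-discovery skipped,
        // every model you want must be listed explicitly.
        models: [{ id: "llama3.3" }, { id: "qwen2.5-coder:32b" }]
      }
    }
  }
}
```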

## Configuration

### Basic Setup

The simplest way to enable Ollama is via an environment variable:

```bash
export OLLAMA_API_KEY="ollama-local"
```

### Custom Base URL

If Ollama runs on a different host or port, point `baseUrl` at it. Remember that explicit config skips auto-discovery, so define your models manually:

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://192.168.1.100:11434/v1"
      }
    }
  }
}
```
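Before pointing Clawdbot at a remote instance, it's worth confirming the API is reachable from the machine Clawdbot runs on (host shown is the example above):

```bash
# Should return a JSON list of the remote instance's models
curl -s http://192.168.1.100:11434/api/tags
```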

### Model Selection

Once configured, all your Ollama models are available:

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}
```

## Advanced

### Reasoning Models

Models with "r1" or "reasoning" in their name are automatically detected as reasoning models and will use extended thinking features:

```bash
ollama pull deepseek-r1:32b
```
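Once pulled, reference it like any other model; the name match alone enables the reasoning path (same config shape as in Quick start):

```json5
{
  agents: {
    defaults: {
      // "deepseek-r1" matches the "r1" pattern, so extended
      // thinking is enabled automatically
      model: { primary: "ollama/deepseek-r1:32b" }
    }
  }
}
```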

### Model Costs

Ollama is free and runs locally, so all model costs are set to $0.

### Context Windows

Ollama models use default context windows. You can customize these in your provider configuration if needed.
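A sketch of what such an override could look like; the `contextWindow` field name here is an assumption for illustration, so verify it against the Clawdbot config reference:

```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        // Hypothetical field: raise the default window for one model
        models: [{ id: "qwen2.5-coder:32b", contextWindow: 32768 }]
      }
    }
  }
}
```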

## Troubleshooting

### Ollama not detected

Make sure Ollama is running:

```bash
ollama serve
```

And that the API is accessible:

```bash
curl http://localhost:11434/api/tags
```

### No models available

Pull at least one model:

```bash
ollama list           # See what's installed
ollama pull llama3.3  # Pull a model
```

### Connection refused

Check that Ollama is running on the correct port:

```bash
# Check if Ollama is running
ps aux | grep ollama

# Or restart Ollama
ollama serve
```
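By default Ollama binds to 127.0.0.1:11434. If Clawdbot runs on a different machine or in a container, have Ollama listen more broadly via its standard `OLLAMA_HOST` variable:

```bash
# Bind to all interfaces instead of loopback only
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```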

## See Also