---
summary: "Run Clawdbot with Ollama (local LLM runtime)"
read_when:
- You want to run Clawdbot with local models via Ollama
- You need Ollama setup and configuration guidance
---
# Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` config is set.
## Quick start
1) Install Ollama: https://ollama.ai
2) Pull a model:
```bash
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
```
3) Enable Ollama for Clawdbot (any value works; Ollama doesn't require a real key):
```bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file
clawdbot config set models.providers.ollama.apiKey "ollama-local"
```
4) Use Ollama models:
```json5
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" }
    }
  }
}
```
## Model Discovery
When Ollama is enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` entry exists, Clawdbot automatically detects models installed on your Ollama instance by querying `/api/tags` and `/api/show` at `http://localhost:11434`. It only keeps models that report tool support, so you don't need to manually configure them.
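You can run the same checks by hand to see what discovery sees. The `capabilities` field in the `/api/show` response is how recent Ollama versions report tool support; older releases may not include it:
```bash
# List installed models (what discovery enumerates)
curl http://localhost:11434/api/tags

# Inspect a single model; tool-capable models list "tools" under capabilities
curl http://localhost:11434/api/show -d '{"model": "llama3.3"}'
```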
To see what models are available:
```bash
ollama list           # models installed in Ollama
clawdbot models list  # models Clawdbot can use
```
To add a new model, simply pull it with Ollama:
```bash
ollama pull mistral
```
The new model will be automatically discovered and available to use.
If you set `models.providers.ollama` explicitly, auto-discovery is skipped. Define your models manually in that case.
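For illustration, an explicit entry might look like the sketch below. The `models` array and its field names are assumptions, not a schema confirmed by this page; see the configuration reference for the exact shape:
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://localhost:11434/v1",
        // Assumed shape: models must be listed by hand once discovery is skipped
        models: [
          { id: "llama3.3" },
          { id: "qwen2.5-coder:32b" }
        ]
      }
    }
  }
}
```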
## Configuration
### Basic Setup
The simplest way to enable Ollama is via environment variable:
```bash
export OLLAMA_API_KEY="ollama-local"
```
### Custom Base URL
If Ollama runs on a different host or port, set `baseUrl` explicitly. Keep in mind that an explicit provider entry skips auto-discovery, so you'll need to define models manually:
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://192.168.1.100:11434/v1"
      }
    }
  }
}
```
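Note the `/v1` suffix: Clawdbot speaks Ollama's OpenAI-compatible API, so `baseUrl` should point at that endpoint rather than the bare API root.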
### Model Selection
Once configured, all your Ollama models are available:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}
```
## Advanced
### Reasoning Models
Models with "r1" or "reasoning" in their name are automatically detected as reasoning models and will use extended thinking features:
```bash
ollama pull deepseek-r1:32b
```
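Once pulled, select it like any other discovered model:
```json5
{
  agents: {
    defaults: {
      // Detected as a reasoning model by name, so extended thinking applies
      model: { primary: "ollama/deepseek-r1:32b" }
    }
  }
}
```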
### Model Costs
Ollama is free and runs locally, so all model costs are set to $0.
### Context Windows
Discovered Ollama models are given a default context window. If that default doesn't match your model, you can override it in your provider configuration.
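As with the explicit model definitions sketched earlier, a per-model entry is a plausible place for this; `contextWindow` below is an assumed field name for illustration, not a documented option:
```json5
// Hypothetical per-model entry inside models.providers.ollama.models;
// "contextWindow" is an assumed field name
{ id: "qwen2.5-coder:32b", contextWindow: 32768 }
```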
## Troubleshooting
### Ollama not detected
Make sure Ollama is running:
```bash
ollama serve
```
And that the API is accessible:
```bash
curl http://localhost:11434/api/tags
```
### No models available
Pull at least one model:
```bash
ollama list # See what's installed
ollama pull llama3.3 # Pull a model
```
### Connection refused
Check that Ollama is running on the correct port:
```bash
# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
```
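If Ollama is bound to a non-default address (for example via the `OLLAMA_HOST` environment variable), point `baseUrl` at that address as shown under Custom Base URL.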
## See Also
- [Model Providers](/concepts/model-providers) - Overview of all providers
- [Model Selection](/agents/model-selection) - How to choose models
- [Configuration](/configuration) - Full config reference