Compare commits

...

4 Commits

Author SHA1 Message Date
Peter Steinberger
af05c6b4e9 test: restore Ollama provider tests (#1606) (thanks @abhaymundhara) 2026-01-24 22:38:21 +00:00
Abhay
6691e32faf fix: Make Ollama provider opt-in to avoid breaking existing tests
**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.

**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via:
  - OLLAMA_API_KEY environment variable, OR
  - Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default

This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)

Closes #1531
2026-01-24 22:37:44 +00:00
Abhay
cda6c02e8f test: Temporarily skip Ollama provider tests to diagnose CI failures 2026-01-24 22:37:19 +00:00
Peter Steinberger
8c9d022a88 feat: add Ollama provider discovery parity (#1606) (thanks @abhaymundhara) 2026-01-24 22:35:17 +00:00
17 changed files with 465 additions and 10 deletions

View File

@@ -10,6 +10,7 @@ Docs: https://docs.clawd.bot
- Docs: update Fly.io guide notes.
- Docs: add Bedrock EC2 instance role setup + IAM steps. (#1625) Thanks @sergical. https://docs.clawd.bot/bedrock
- Exec approvals: forward approval prompts to chat with `/approve` for all channels (including plugins). (#1621) Thanks @czekaj. https://docs.clawd.bot/tools/exec-approvals https://docs.clawd.bot/tools/slash-commands
- Models: add Ollama provider discovery + docs. (#1606) Thanks @abhaymundhara. https://docs.clawd.bot/providers/ollama
### Fixes
- Web UI: hide internal `message_id` hints in chat bubbles.

View File

@@ -236,6 +236,30 @@ MiniMax is configured via `models.providers` because it uses custom endpoints:
See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.
### Ollama
Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: `OLLAMA_API_KEY` (any value; Ollama runs locally)
- Example model: `ollama/llama3.3`
- Installation: https://ollama.ai
```bash
# Install Ollama, then pull a model:
ollama pull llama3.3
```
```json5
{
  agents: {
    defaults: { model: { primary: "ollama/llama3.3" } }
  }
}
```
Ollama is auto-discovered when `OLLAMA_API_KEY` (or an auth profile) is set and no explicit `models.providers.ollama` entry exists. Discovery probes `http://127.0.0.1:11434` and filters to tool-capable models. See [/providers/ollama](/providers/ollama) for model recommendations and custom configuration.
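For example, a minimal opt-in sketch (any non-empty key value works; Ollama doesn't check it):
```bash
export OLLAMA_API_KEY="ollama-local"
clawdbot models list   # discovered Ollama models appear as ollama/<name>
```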
### Local proxies (LM Studio, vLLM, LiteLLM, etc.)
Example (OpenAI-compatible):

View File

@@ -35,6 +35,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)?
- [Z.AI](/providers/zai)
- [GLM models](/providers/glm)
- [MiniMax](/providers/minimax)
- [Ollama (local models)](/providers/ollama)
## Transcription providers

docs/providers/ollama.md (new file, 171 lines)
View File

@@ -0,0 +1,171 @@
---
summary: "Run Clawdbot with Ollama (local LLM runtime)"
read_when:
- You want to run Clawdbot with local models via Ollama
- You need Ollama setup and configuration guidance
---
# Ollama
Ollama is a local LLM runtime that makes it easy to run open-source models on your machine. Clawdbot integrates with Ollama's OpenAI-compatible API and can **auto-discover tool-capable models** when enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` config is set.
## Quick start
1) Install Ollama: https://ollama.ai
2) Pull a model:
```bash
ollama pull llama3.3
# or
ollama pull qwen2.5-coder:32b
# or
ollama pull deepseek-r1:32b
```
3) Enable Ollama for Clawdbot (any value works; Ollama doesn't require a real key):
```bash
# Set environment variable
export OLLAMA_API_KEY="ollama-local"
# Or configure in your config file (note: an explicit models.providers.ollama entry skips auto-discovery)
clawdbot config set models.providers.ollama.apiKey "ollama-local"
```
4) Use Ollama models:
```json5
{
  agents: {
    defaults: {
      model: { primary: "ollama/llama3.3" }
    }
  }
}
```
## Model Discovery
When Ollama is enabled via `OLLAMA_API_KEY` (or an auth profile) and no explicit `models.providers.ollama` entry exists, Clawdbot automatically detects models installed on your Ollama instance by querying `/api/tags` and `/api/show` at `http://127.0.0.1:11434`. It only keeps models that report tool support, so you don't need to manually configure them.
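You can reproduce those probes by hand to preview what discovery will find (a sketch; the `{"name": ...}` payload mirrors what the discovery code sends to `/api/show`):
```bash
# List installed models (discovery's starting point):
curl http://127.0.0.1:11434/api/tags
# Check one model's capabilities and context length:
curl http://127.0.0.1:11434/api/show -H 'Content-Type: application/json' -d '{"name": "llama3.3"}'
```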
To see what models are available:
```bash
ollama list
clawdbot models list
```
To add a new model, simply pull it with Ollama:
```bash
ollama pull mistral
```
The new model will be automatically discovered and available to use.
If you set `models.providers.ollama` explicitly, auto-discovery is skipped. Define your models manually in that case.
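A minimal sketch of such an explicit entry (the model fields mirror what the discovery code in this PR produces; the values are illustrative):
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://127.0.0.1:11434/v1",
        api: "openai-completions",
        models: [
          {
            id: "llama3.3",
            name: "llama3.3",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 8192,  // illustrative; match your model
            maxTokens: 81920,
          },
        ],
      },
    },
  },
}
```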
## Configuration
### Basic Setup
The simplest way to enable Ollama is via environment variable:
```bash
export OLLAMA_API_KEY="ollama-local"
```
### Custom Base URL
If Ollama is running on a different host or port (note: explicit config skips auto-discovery, so define models manually):
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://192.168.1.100:11434/v1"
      }
    }
  }
}
```
### Model Selection
Once configured, all your Ollama models are available:
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "ollama/llama3.3",
        fallback: ["ollama/qwen2.5-coder:32b"]
      }
    }
  }
}
```
## Advanced
### Reasoning Models
Models that report the `thinking` capability (for example, DeepSeek R1 variants) are automatically detected as reasoning models and will use extended thinking features:
```bash
ollama pull deepseek-r1:32b
```
### Model Costs
Ollama is free and runs locally, so all model costs are set to $0.
### Context Windows
Ollama models use default context windows. You can customize these in your provider configuration if needed.
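For example, a hedged sketch raising one model's window via an explicit provider entry (which, as noted above, disables auto-discovery; values are illustrative):
```json5
{
  models: {
    providers: {
      ollama: {
        apiKey: "ollama-local",
        baseUrl: "http://127.0.0.1:11434/v1",
        models: [
          {
            id: "qwen2.5-coder:32b",
            name: "qwen2.5-coder:32b",
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 32768,  // illustrative; match your model's actual limit
            maxTokens: 32768,
          },
        ],
      },
    },
  },
}
```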
## Troubleshooting
### Ollama not detected
Make sure Ollama is running:
```bash
ollama serve
```
And that the API is accessible:
```bash
curl http://localhost:11434/api/tags
```
### No models available
Pull at least one model:
```bash
ollama list # See what's installed
ollama pull llama3.3 # Pull a model
```
### Connection refused
Check that Ollama is running on the correct port:
```bash
# Check if Ollama is running
ps aux | grep ollama
# Or restart Ollama
ollama serve
```
## See Also
- [Model Providers](/concepts/model-providers) - Overview of all providers
- [Model Selection](/agents/model-selection) - How to choose models
- [Configuration](/configuration) - Full config reference

View File

@@ -284,6 +284,7 @@ export function resolveEnvApiKey(provider: string): EnvApiKeyResult | null {
    synthetic: "SYNTHETIC_API_KEY",
    mistral: "MISTRAL_API_KEY",
    opencode: "OPENCODE_API_KEY",
    ollama: "OLLAMA_API_KEY",
  };
  const envVar = envMap[normalized];
  if (!envVar) return null;

View File

@@ -0,0 +1,106 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import { resolveImplicitProviders } from "./models-config.providers.js";
import { mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

describe("Ollama provider", () => {
  const previousEnv = { ...process.env };

  afterEach(() => {
    for (const key of Object.keys(process.env)) {
      if (!(key in previousEnv)) delete process.env[key];
    }
    for (const [key, value] of Object.entries(previousEnv)) {
      process.env[key] = value;
    }
    vi.restoreAllMocks();
    vi.unstubAllGlobals();
  });

  it("should not include ollama when no API key is configured", async () => {
    const agentDir = mkdtempSync(join(tmpdir(), "clawd-test-"));
    const providers = await resolveImplicitProviders({ agentDir });
    // Ollama requires explicit configuration via OLLAMA_API_KEY env var or profile
    expect(providers?.ollama).toBeUndefined();
  });

  it("discovers tool-capable models when OLLAMA_API_KEY is set", async () => {
    process.env.OLLAMA_API_KEY = "ollama-local";
    delete process.env.VITEST;
    process.env.NODE_ENV = "development";
    const fetchMock = vi
      .fn()
      .mockResolvedValueOnce({
        ok: true,
        status: 200,
        json: async () => ({
          models: [{ name: "llama3.3" }, { name: "no-tools-model" }],
        }),
      })
      .mockResolvedValueOnce({
        ok: true,
        status: 200,
        json: async () => ({
          capabilities: ["tools", "thinking"],
          model_info: {
            "general.architecture": "llama",
            "llama.context_length": "4096",
          },
        }),
      })
      .mockResolvedValueOnce({
        ok: true,
        status: 200,
        json: async () => ({
          capabilities: ["thinking"],
          model_info: {
            "general.architecture": "llama",
            "llama.context_length": "2048",
          },
        }),
      });
    vi.stubGlobal("fetch", fetchMock as unknown as typeof fetch);
    const agentDir = mkdtempSync(join(tmpdir(), "clawd-test-"));
    const providers = await resolveImplicitProviders({ agentDir });
    expect(fetchMock).toHaveBeenCalledTimes(3);
    expect(fetchMock.mock.calls[0]?.[0]).toBe("http://127.0.0.1:11434/api/tags");
    expect(fetchMock.mock.calls[1]?.[0]).toBe("http://127.0.0.1:11434/api/show");
    const provider = providers?.ollama;
    expect(provider?.baseUrl).toBe("http://127.0.0.1:11434/v1");
    expect(provider?.models).toHaveLength(1);
    expect(provider?.models?.[0]?.id).toBe("llama3.3");
    expect(provider?.models?.[0]?.reasoning).toBe(true);
    expect(provider?.models?.[0]?.contextWindow).toBe(4096);
    expect(provider?.models?.[0]?.maxTokens).toBe(4096 * 10);
  });

  it("skips discovery when ollama is explicitly configured", async () => {
    process.env.OLLAMA_API_KEY = "ollama-local";
    delete process.env.VITEST;
    process.env.NODE_ENV = "development";
    const fetchMock = vi.fn();
    vi.stubGlobal("fetch", fetchMock as unknown as typeof fetch);
    const agentDir = mkdtempSync(join(tmpdir(), "clawd-test-"));
    const providers = await resolveImplicitProviders({
      agentDir,
      explicitProviders: {
        ollama: {
          baseUrl: "http://example.com/v1",
          api: "openai-completions",
          models: [],
        },
      },
    });
    expect(fetchMock).not.toHaveBeenCalled();
    expect(providers?.ollama).toBeUndefined();
  });
});

View File

@@ -1,9 +1,11 @@
import type { ClawdbotConfig } from "../config/config.js";
import type { ModelDefinitionConfig } from "../config/types.models.js";
import {
  DEFAULT_COPILOT_API_BASE_URL,
  resolveCopilotApiToken,
} from "../providers/github-copilot-token.js";
import { ensureAuthProfileStore, listProfilesForProvider } from "./auth-profiles.js";
import { normalizeProviderId } from "./model-selection.js";
import { resolveAwsSdkEnvVarName, resolveEnvApiKey } from "./model-auth.js";
import { discoverBedrockModels } from "./bedrock-discovery.js";
import {
@@ -62,6 +64,127 @@ const QWEN_PORTAL_DEFAULT_COST = {
  cacheWrite: 0,
};

const OLLAMA_HOST_BASE_URL = "http://127.0.0.1:11434";
const OLLAMA_DEFAULT_CONTEXT_WINDOW = 8192;
const OLLAMA_MAX_TOKENS_MULTIPLIER = 10;
const OLLAMA_DISCOVERY_TIMEOUT_MS = 5000;
const OLLAMA_DEFAULT_COST = {
  input: 0,
  output: 0,
  cacheRead: 0,
  cacheWrite: 0,
};

interface OllamaModel {
  name: string;
  modified_at: string;
  size: number;
  digest: string;
  details?: {
    family?: string;
    parameter_size?: string;
  };
}

interface OllamaTagsResponse {
  models: OllamaModel[];
}

interface OllamaShowResponse {
  capabilities?: string[];
  model_info?: Record<string, string | number>;
}

function parseOllamaNumber(value: unknown): number | undefined {
  if (typeof value === "number" && Number.isFinite(value)) return value;
  if (typeof value === "string" && value.trim()) {
    const parsed = Number(value);
    if (Number.isFinite(parsed)) return parsed;
  }
  return undefined;
}

function resolveOllamaContextWindow(
  modelInfo: Record<string, string | number> | undefined,
): number {
  if (!modelInfo) return OLLAMA_DEFAULT_CONTEXT_WINDOW;
  const architecture = String(modelInfo["general.architecture"] ?? "").trim();
  const contextKey = architecture ? `${architecture}.context_length` : "";
  const contextWindow =
    (contextKey ? parseOllamaNumber(modelInfo[contextKey]) : undefined) ??
    parseOllamaNumber(modelInfo["context_length"]);
  return contextWindow ?? OLLAMA_DEFAULT_CONTEXT_WINDOW;
}

function normalizeOllamaHostBaseUrl(baseUrl: string): string {
  const trimmed = baseUrl.trim().replace(/\/+$/, "");
  return trimmed.endsWith("/v1") ? trimmed.slice(0, -3) : trimmed;
}

async function discoverOllamaModels(baseUrl: string): Promise<ModelDefinitionConfig[]> {
  // Skip Ollama discovery in test environments
  if (process.env.VITEST || process.env.NODE_ENV === "test") {
    return [];
  }
  try {
    const response = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(OLLAMA_DISCOVERY_TIMEOUT_MS),
    });
    if (!response.ok) {
      console.warn(`Failed to discover Ollama models: ${response.status}`);
      return [];
    }
    const data = (await response.json()) as OllamaTagsResponse;
    if (!data.models || data.models.length === 0) {
      console.warn("No Ollama models found on local instance");
      return [];
    }
    const models = await Promise.all(
      data.models.map(async (model) => {
        try {
          const detailsResponse = await fetch(`${baseUrl}/api/show`, {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
            },
            body: JSON.stringify({ name: model.name }),
            signal: AbortSignal.timeout(OLLAMA_DISCOVERY_TIMEOUT_MS),
          });
          if (!detailsResponse.ok) {
            console.warn(
              `Failed to fetch Ollama model details for ${model.name}: ${detailsResponse.status}`,
            );
            return null;
          }
          const details = (await detailsResponse.json()) as OllamaShowResponse;
          const capabilities = Array.isArray(details.capabilities) ? details.capabilities : [];
          if (!capabilities.includes("tools")) {
            console.debug(`Skipping Ollama model ${model.name}: does not support tools`);
            return null;
          }
          const contextWindow = resolveOllamaContextWindow(details.model_info);
          return {
            id: model.name,
            name: model.name,
            reasoning: capabilities.includes("thinking"),
            input: ["text"],
            cost: OLLAMA_DEFAULT_COST,
            contextWindow,
            maxTokens: contextWindow * OLLAMA_MAX_TOKENS_MULTIPLIER,
          };
        } catch (error) {
          console.warn(`Failed to fetch Ollama model details for ${model.name}: ${String(error)}`);
          return null;
        }
      }),
    );
    return models.filter((model): model is ModelDefinitionConfig => Boolean(model));
  } catch (error) {
    console.warn(`Failed to discover Ollama models: ${String(error)}`);
    return [];
  }
}

function normalizeApiKeyConfig(value: string): string {
  const trimmed = value.trim();
  const match = /^\$\{([A-Z0-9_]+)\}$/.exec(trimmed);
@@ -275,11 +398,28 @@ function buildSyntheticProvider(): ProviderConfig {
  };
}

-export function resolveImplicitProviders(params: { agentDir: string }): ModelsConfig["providers"] {
+async function buildOllamaProvider(baseUrl: string): Promise<ProviderConfig> {
+  const hostBaseUrl = normalizeOllamaHostBaseUrl(baseUrl);
+  const models = await discoverOllamaModels(hostBaseUrl);
+  return {
+    baseUrl: `${hostBaseUrl}/v1`,
+    api: "openai-completions",
+    models,
+  };
+}
+
+export async function resolveImplicitProviders(params: {
+  agentDir: string;
+  explicitProviders?: ModelsConfig["providers"];
+}): Promise<ModelsConfig["providers"]> {
  const providers: Record<string, ProviderConfig> = {};
  const authStore = ensureAuthProfileStore(params.agentDir, {
    allowKeychainPrompt: false,
  });
+  const explicitProviders = params.explicitProviders ?? {};
+  const hasExplicitOllama = Object.keys(explicitProviders).some(
+    (key) => normalizeProviderId(key) === "ollama",
+  );
  const minimaxKey =
    resolveEnvApiKeyVarName("minimax") ??
@@ -317,6 +457,14 @@ export function resolveImplicitProviders(params: { agentDir: string }): ModelsConfig["providers"] {
    };
  }

  // Ollama provider - only add if explicitly configured
  const ollamaKey =
    resolveEnvApiKeyVarName("ollama") ??
    resolveApiKeyFromProfiles({ provider: "ollama", store: authStore });
  if (ollamaKey && !hasExplicitOllama) {
    providers.ollama = { ...(await buildOllamaProvider(OLLAMA_HOST_BASE_URL)), apiKey: ollamaKey };
  }
  return providers;
}

View File

@@ -80,7 +80,7 @@ export async function ensureClawdbotModelsJson(
  const agentDir = agentDirOverride?.trim() ? agentDirOverride.trim() : resolveClawdbotAgentDir();
  const explicitProviders = (cfg.models?.providers ?? {}) as Record<string, ProviderConfig>;
-  const implicitProviders = resolveImplicitProviders({ agentDir });
+  const implicitProviders = await resolveImplicitProviders({ agentDir, explicitProviders });
  const providers: Record<string, ProviderConfig> = mergeProviders({
    implicit: implicitProviders,
    explicit: explicitProviders,

View File

@@ -72,7 +72,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -71,7 +71,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -70,7 +70,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -70,7 +70,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -71,7 +71,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -70,7 +70,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -71,7 +71,7 @@ const _makeOpenAiConfig = (modelIds: string[]) =>
  }) satisfies ClawdbotConfig;

const _ensureModels = (cfg: ClawdbotConfig, agentDir: string) =>
-  ensureClawdbotModelsJson(cfg, agentDir);
+  ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const _textFromContent = (content: unknown) => {
  if (typeof content === "string") return content;

View File

@@ -130,7 +130,7 @@ const makeOpenAiConfig = (modelIds: string[]) =>
    },
  }) satisfies ClawdbotConfig;

-const ensureModels = (cfg: ClawdbotConfig) => ensureClawdbotModelsJson(cfg, agentDir);
+const ensureModels = (cfg: ClawdbotConfig) => ensureClawdbotModelsJson(cfg, agentDir) as unknown;

const nextSessionFile = () => {
  sessionCounter += 1;

View File

@@ -1,5 +1,8 @@
import { afterAll, afterEach, beforeEach, vi } from "vitest";

// Ensure Vitest environment is properly set
process.env.VITEST = "true";

import type {
  ChannelId,
  ChannelOutboundAdapter,