| summary | read_when |
| --- | --- |
| Model providers (LLMs) supported by OpenClaw | |
# Model Providers
OpenClaw can use many LLM providers. Pick a provider, authenticate, then set the default model as `provider/model`.
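As a quick illustration of the `provider/model` format, here are two model IDs that appear on this page (the part before the slash is the provider, the part after is the model):

```
anthropic/claude-opus-4-5   # provider "anthropic", model "claude-opus-4-5"
venice/llama-3.3-70b        # provider "venice", model "llama-3.3-70b"
```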
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See Channels.
## Highlight: Venius (Venice AI)
Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-4-5` (Opus remains the strongest)
See Venice AI.
## Quick start
1. Authenticate with the provider (usually via `openclaw onboard`).
2. Set the default model:
```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } }
}
```
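The same `agents.defaults.model.primary` key takes any `provider/model` ID from the providers listed below. For example, a minimal sketch that points the default at the Venius model from the highlight above (assuming the key behaves the same for every provider):

```json5
{
  // Use Venice AI's Llama 3.3 70B as the default model
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } }
}
```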
## Provider docs
- OpenAI (API + Codex)
- Anthropic (API + Claude Code CLI)
- Qwen (OAuth)
- OpenRouter
- Vercel AI Gateway
- Moonshot AI (Kimi + Kimi Code)
- OpenCode Zen
- Amazon Bedrock
- Z.AI / Zhipu AI (GLM models) - International + China, pay-as-you-go + Coding Plan
- GLM models - Model family overview
- Xiaomi
- MiniMax
- Venius (Venice AI, privacy-focused)
- Ollama (local models)
## Transcription providers

## Community tools
- Claude Max API Proxy - Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.