| summary | read_when |
| --- | --- |
| Model providers (LLMs) supported by Moltbot | |
# Model Providers
Moltbot can use many LLM providers. Pick a provider, authenticate, then set the
default model as `provider/model`.
Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See Channels.
## Highlight: Venius (Venice AI)
Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
See Venice AI.
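To make Venius the default, the model IDs above can be dropped into the same config shape shown under Quick start (a sketch, assuming the standard Moltbot config layout):

```json5
{
  // Use the privacy-first Venice default for everyday tasks
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } }
}
```

Swap in the Opus model ID for the `primary` value when you want the strongest model for hard tasks.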
## Quick start
- Authenticate with the provider (usually via `moltbot onboard`).
- Set the default model:

```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-5" } } }
}
```
## Provider docs
- OpenAI (API + Codex)
- Anthropic (API + Claude Code CLI)
- Qwen (OAuth)
- OpenRouter
- Vercel AI Gateway
- Moonshot AI (Kimi + Kimi Code)
- OpenCode Zen
- Amazon Bedrock
- Z.AI / Zhipu AI (GLM models) - International + China, pay-as-you-go + Coding Plan
- GLM models - Model family overview
- Xiaomi
- MiniMax
- Venius (Venice AI, privacy-focused)
- Ollama (local models)
## Transcription providers
## Community tools
- Claude Max API Proxy - Use Claude Max/Pro subscription as an OpenAI-compatible API endpoint
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.