OpenCode Zen is a list of tested and verified models provided by the OpenCode team.
:::note
OpenCode Zen is currently in beta.
:::
Zen works like any other provider in OpenCode. You log in to OpenCode Zen and get your API key. It's completely optional; you don't need it to use OpenCode.
There are a large number of models out there, but only a few of them work well as coding agents. Additionally, different providers configure the same model very differently, so you get very different performance and quality.
:::tip
We tested a select group of models and providers that work well with OpenCode.
:::
So if you are using a model through something like OpenRouter, you can never be sure if you are getting the best version of the model you want.
To fix this, we did a couple of things:

1. Tested and verified a list of models and providers that work well with OpenCode.
2. Built OpenCode Zen, an AI gateway that gives you access to these models.
OpenCode Zen works like any other provider in OpenCode:

1. Run the `/connect` command in the TUI, select OpenCode Zen, and paste your API key.
2. Run `/models` in the TUI to see the list of models we recommend.

You are charged per request and you can add credits to your account.
You can also access our models through the following API endpoints.
| Model | Model ID | Endpoint | AI SDK Package |
|---|---|---|---|
| GPT 5.2 | gpt-5.2 | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5.2 Codex | gpt-5.2-codex | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5.1 | gpt-5.1 | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5.1 Codex | gpt-5.1-codex | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5.1 Codex Max | gpt-5.1-codex-max | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5.1 Codex Mini | gpt-5.1-codex-mini | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5 | gpt-5 | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5 Codex | gpt-5-codex | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| GPT 5 Nano | gpt-5-nano | https://opencode.ai/zen/v1/responses | @ai-sdk/openai |
| Claude Opus 4.6 | claude-opus-4-6 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Opus 4.5 | claude-opus-4-5 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Opus 4.1 | claude-opus-4-1 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Sonnet 4.6 | claude-sonnet-4-6 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Sonnet 4.5 | claude-sonnet-4-5 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Sonnet 4 | claude-sonnet-4 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Haiku 4.5 | claude-haiku-4-5 | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Claude Haiku 3.5 | claude-3-5-haiku | https://opencode.ai/zen/v1/messages | @ai-sdk/anthropic |
| Gemini 3.1 Pro | gemini-3.1-pro | https://opencode.ai/zen/v1/models/gemini-3.1-pro | @ai-sdk/google |
| Gemini 3 Pro | gemini-3-pro | https://opencode.ai/zen/v1/models/gemini-3-pro | @ai-sdk/google |
| Gemini 3 Flash | gemini-3-flash | https://opencode.ai/zen/v1/models/gemini-3-flash | @ai-sdk/google |
| MiniMax M2.5 | minimax-m2.5 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| MiniMax M2.5 Free | minimax-m2.5-free | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| MiniMax M2.1 | minimax-m2.1 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| GLM 5 | glm-5 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| GLM 5 Free | glm-5-free | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| GLM 4.7 | glm-4.7 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| GLM 4.6 | glm-4.6 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2.5 | kimi-k2.5 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2.5 Free | kimi-k2.5-free | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2 Thinking | kimi-k2-thinking | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Kimi K2 | kimi-k2 | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Qwen3 Coder 480B | qwen3-coder | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
| Big Pickle | big-pickle | https://opencode.ai/zen/v1/chat/completions | @ai-sdk/openai-compatible |
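For example, here's a rough sketch of calling one of the `chat/completions` models with the AI SDK's `@ai-sdk/openai-compatible` package. The `OPENCODE_ZEN_API_KEY` environment variable is just a placeholder, and the `baseURL` (the endpoint above without the `/chat/completions` suffix, which the package appends) is an assumption. The `@ai-sdk/openai` and `@ai-sdk/anthropic` packages can be pointed at the `responses` and `messages` endpoints the same way via their `baseURL` option.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// Point the OpenAI-compatible provider at the Zen gateway.
const zen = createOpenAICompatible({
  name: "opencode-zen",
  baseURL: "https://opencode.ai/zen/v1",
  apiKey: process.env.OPENCODE_ZEN_API_KEY, // placeholder env var name
});

const { text } = await generateText({
  model: zen("kimi-k2"), // any model ID from the chat/completions rows above
  prompt: "Summarize what an AI gateway does in one sentence.",
});

console.log(text);
```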
The model ID in your OpenCode config uses the format `opencode/<model-id>`. For example, for GPT 5.2 Codex, you would use `opencode/gpt-5.2-codex` in your config.
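For instance, a minimal `opencode.json` that sets GPT 5.2 Codex as the default model might look like this:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "opencode/gpt-5.2-codex"
}
```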
You can fetch the full list of available models and their metadata from:
https://opencode.ai/zen/v1/models
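For example, a quick sketch of fetching that list, assuming the endpoint takes your Zen API key as a Bearer token (the env var name is a placeholder):

```ts
const res = await fetch("https://opencode.ai/zen/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENCODE_ZEN_API_KEY}` },
});
console.log(await res.json()); // model IDs and metadata
```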
We support a pay-as-you-go model. Below are the prices per 1M tokens.
| Model | Input | Output | Cached Read | Cached Write |
|---|---|---|---|---|
| Big Pickle | Free | Free | Free | - |
| MiniMax M2.5 Free | Free | Free | Free | - |
| MiniMax M2.5 | $0.30 | $1.20 | $0.06 | - |
| MiniMax M2.1 | $0.30 | $1.20 | $0.10 | - |
| GLM 5 Free | Free | Free | Free | - |
| GLM 5 | $1.00 | $3.20 | $0.20 | - |
| GLM 4.7 | $0.60 | $2.20 | $0.10 | - |
| GLM 4.6 | $0.60 | $2.20 | $0.10 | - |
| Kimi K2.5 Free | Free | Free | Free | - |
| Kimi K2.5 | $0.60 | $3.00 | $0.08 | - |
| Kimi K2 Thinking | $0.40 | $2.50 | - | - |
| Kimi K2 | $0.40 | $2.50 | - | - |
| Qwen3 Coder 480B | $0.45 | $1.50 | - | - |
| Claude Opus 4.6 (≤ 200K tokens) | $5.00 | $25.00 | $0.50 | $6.25 |
| Claude Opus 4.6 (> 200K tokens) | $10.00 | $37.50 | $1.00 | $12.50 |
| Claude Opus 4.5 | $5.00 | $25.00 | $0.50 | $6.25 |
| Claude Opus 4.1 | $15.00 | $75.00 | $1.50 | $18.75 |
| Claude Sonnet 4.6 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 |
| Claude Sonnet 4.6 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 |
| Claude Sonnet 4.5 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 |
| Claude Sonnet 4.5 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 |
| Claude Sonnet 4 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 |
| Claude Sonnet 4 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 |
| Claude Haiku 4.5 | $1.00 | $5.00 | $0.10 | $1.25 |
| Claude Haiku 3.5 | $0.80 | $4.00 | $0.08 | $1.00 |
| Gemini 3.1 Pro (≤ 200K tokens) | $2.00 | $12.00 | $0.20 | - |
| Gemini 3.1 Pro (> 200K tokens) | $4.00 | $18.00 | $0.40 | - |
| Gemini 3 Pro (≤ 200K tokens) | $2.00 | $12.00 | $0.20 | - |
| Gemini 3 Pro (> 200K tokens) | $4.00 | $18.00 | $0.40 | - |
| Gemini 3 Flash | $0.50 | $3.00 | $0.05 | - |
| GPT 5.2 | $1.75 | $14.00 | $0.175 | - |
| GPT 5.2 Codex | $1.75 | $14.00 | $0.175 | - |
| GPT 5.1 | $1.07 | $8.50 | $0.107 | - |
| GPT 5.1 Codex | $1.07 | $8.50 | $0.107 | - |
| GPT 5.1 Codex Max | $1.25 | $10.00 | $0.125 | - |
| GPT 5.1 Codex Mini | $0.25 | $2.00 | $0.025 | - |
| GPT 5 | $1.07 | $8.50 | $0.107 | - |
| GPT 5 Codex | $1.07 | $8.50 | $0.107 | - |
| GPT 5 Nano | Free | Free | Free | - |
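To take one example from the table: a Claude Sonnet 4.5 request with 100K input tokens and 10K output tokens (staying under the 200K threshold) works out to roughly 0.1 × $3.00 + 0.01 × $15.00 = $0.45, before any cached reads or writes.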
You might notice Claude Haiku 3.5 in your usage history. This is a low-cost model that's used to generate the titles of your sessions.
:::note
Credit card fees are passed along at cost (4.4% + $0.30 per transaction); we don't charge anything beyond that.
:::
The free models:
If your balance goes below $5, Zen will automatically reload $20.
You can change the auto-reload amount or disable auto-reload entirely.
You can also set a monthly usage limit for the entire workspace and for each member of your team.
For example, say you set a monthly usage limit of $20. Zen will not use more than $20 in a month. But if you have auto-reload enabled, your card might still be charged to top up your balance when it goes below $5, so you can end up paying more than $20.
All our models are hosted in the US. Our providers follow a zero-retention policy and do not use your data for model training, with the following exceptions:
Zen also works great for teams. You can invite teammates, assign roles, curate the models your team uses, and more.
:::note
Workspaces are currently free for teams as part of the beta. We'll be sharing more details on pricing soon.
:::
You can invite teammates to your workspace and assign roles:
Admins can also set monthly spending limits for each member to keep costs under control.
Admins can enable or disable specific models for the workspace. Requests made to a disabled model will return an error.
This is useful, for example, when you want to block a model that collects data.
You can use your own OpenAI or Anthropic API keys while still accessing the other models in Zen. When you use your own keys, tokens are billed directly by the provider, not by Zen. This is useful if your organization already has an OpenAI or Anthropic key that you'd rather use than the one Zen provides.
We created OpenCode Zen to: