For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
Goal: Replace raw Anthropic/OpenAI SDKs with the Vercel AI SDK's unified
streamText API, aligned with OpenCode Zen's recommended provider packages.
Architecture: Create 4 provider instances (Anthropic, OpenAI, Google,
OpenAI-compatible) each pointing at the Zen gateway. Replace 3 separate
streaming generator functions with a single streamText call that picks the
right provider based on model.provider.npm. Keep the existing SSE protocol to
the frontend unchanged.
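For reference, the unified generator in the second task assumes a StreamEvent union along these lines (a sketch inferred from the yields in Step 2 below; the authoritative definition already lives in backend/routes/api.ts):

```typescript
// Sketch of the StreamEvent shape, inferred from the yields in streamModel.
// The existing definition in backend/routes/api.ts is authoritative.
type StreamEvent =
  | { type: "chunk"; data: { model_id: string; content: string } }
  | { type: "done"; data: { model_id: string; full_content: string } }
  | { type: "error"; data: { model_id: string; error: string } };
```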
Tech Stack: ai@4, @ai-sdk/anthropic@1, @ai-sdk/openai@1,
@ai-sdk/google@1, Hono SSE streaming (note: @ai-sdk/openai-compatible
dropped due to esm.sh/Deno zod 4 incompatibility — openai-compatible models use
createOpenAI with provider.chat() instead)
Files:
- backend/routes/api.ts:1-10 (imports)
- backend/routes/api.ts:68-90 (client factories)

Step 1: Replace imports
Remove:
```typescript
import Anthropic from "https://esm.sh/@anthropic-ai/sdk@0.39.0";
import OpenAI from "https://esm.sh/openai@4.96.0";
```
Add:
```typescript
import { streamText } from "https://esm.sh/ai@4";
import { createAnthropic } from "https://esm.sh/@ai-sdk/anthropic@1";
import { createOpenAI } from "https://esm.sh/@ai-sdk/openai@1";
import { createGoogleGenerativeAI } from "https://esm.sh/@ai-sdk/google@1";
```
(@ai-sdk/openai-compatible is intentionally not imported; per the tech-stack note, openai-compatible models go through createOpenAI instead.)
Note: use major-version-only pins on esm.sh — it resolves to latest within that major.
Step 2: Replace client factories
Remove getAnthropicClient(), getOpenAIClient(), and their cached variables
(lines 68-90).
Add a single getProviderModel function:
```typescript
// Returns an AI SDK language model for a Zen catalog entry, dispatching
// on the provider package the catalog recommends via model.provider.npm.
function getProviderModel(model: any) {
  const apiKey = Deno.env.get("OPENCODE_API_KEY") || "";
  const npm = model.provider?.npm || "";
  if (npm === "@ai-sdk/anthropic") {
    const provider = createAnthropic({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  if (npm === "@ai-sdk/openai") {
    const provider = createOpenAI({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  if (npm === "@ai-sdk/google") {
    const provider = createGoogleGenerativeAI({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  // Default: openai-compatible (GLM, Qwen, Kimi, MiniMax, etc.).
  // @ai-sdk/openai-compatible breaks on esm.sh/Deno (zod 4 conflict), so
  // these models go through createOpenAI's Chat Completions model instead.
  const provider = createOpenAI({
    baseURL: "https://opencode.ai/zen/v1",
    apiKey,
  });
  return provider.chat(model.id);
}
```
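getProviderModel (and streamModel below) only read a few fields off each catalog entry; a sketch of the assumed shape (field names taken from the code in this plan, not from the Zen catalog docs):

```typescript
// Minimal shape of the model objects this plan's code actually touches.
interface ZenModel {
  id: string;                   // model identifier passed to the provider
  provider?: { npm?: string };  // recommended AI SDK package, per Zen
  limit: { output: number };    // max output tokens for the model
}
```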
Step 3: Commit
```sh
git add backend/routes/api.ts
git commit -m "refactor: replace raw SDKs with Vercel AI SDK provider instances"
```
Files:
- backend/routes/api.ts:92-208 (streaming functions)

Step 1: Remove old streaming functions
Delete streamAnthropic, streamOpenAIResponses, streamChatCompletions, and
streamModel (lines 92-208).
Step 2: Write new unified streamModel
```typescript
// Unified replacement for streamAnthropic / streamOpenAIResponses /
// streamChatCompletions: one streamText call, provider chosen per model.
async function* streamModel(
  model: any,
  messages: { role: string; content: string }[],
): AsyncGenerator<StreamEvent> {
  try {
    const result = streamText({
      model: getProviderModel(model),
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
      maxTokens: Math.min(4096, model.limit.output),
    });
    let fullContent = "";
    for await (const chunk of result.textStream) {
      fullContent += chunk;
      yield { type: "chunk", data: { model_id: model.id, content: chunk } };
    }
    yield {
      type: "done",
      data: { model_id: model.id, full_content: fullContent },
    };
  } catch (err: any) {
    console.error(`[streamModel] ${model.id} error:`, err);
    yield { type: "error", data: { model_id: model.id, error: err.message } };
  }
}
```
This preserves the exact same StreamEvent interface and SSE protocol — no
frontend changes needed.
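For context, a sketch of how the generator feeds Hono's SSE helper (the route path and event mapping here are illustrative; the existing handler in backend/routes/api.ts already implements the real protocol and stays unchanged):

```typescript
import { Hono } from "https://esm.sh/hono@4";
import { streamSSE } from "https://esm.sh/hono@4/streaming";

const app = new Hono();

// Illustrative route: forwards each StreamEvent as one SSE frame.
app.post("/api/chat", (c) =>
  streamSSE(c, async (stream) => {
    const { model, messages } = await c.req.json();
    for await (const event of streamModel(model, messages)) {
      await stream.writeSSE({
        event: event.type,                // "chunk" | "done" | "error"
        data: JSON.stringify(event.data),
      });
    }
  }),
);
```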
Step 3: Commit
```sh
git add backend/routes/api.ts
git commit -m "refactor: replace 3 streaming functions with unified streamText"
```
Step 1: Push to Val Town and test
Open the app and create a conversation selecting models from different providers (at minimum one Anthropic, one OpenAI, and one openai-compatible model).
Send a message and verify that each model streams chunks incrementally and finishes with a done event rather than an error.
Step 2: Test the Google provider path
If any Gemini models are in your model list, select one and verify the @ai-sdk/google branch works.
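To exercise this branch without the UI, a minimal smoke test also works (a sketch; the model id and output limit are illustrative placeholders, substitute a real entry from your catalog):

```typescript
// Illustrative smoke test for the @ai-sdk/google branch.
// Run with OPENCODE_API_KEY set: deno run -A smoke.ts
const gemini = {
  id: "gemini-2.5-flash",              // placeholder id
  provider: { npm: "@ai-sdk/google" },
  limit: { output: 4096 },
};

for await (const event of streamModel(gemini, [
  { role: "user", content: "Reply with one word." },
])) {
  console.log(event.type, event.data);
}
```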
Step 3: Commit any fixes