# Vercel AI SDK Migration

**For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Replace the raw Anthropic/OpenAI SDKs with the Vercel AI SDK's unified `streamText` API, aligned with OpenCode Zen's recommended provider packages.

**Architecture:** Create four provider instances (Anthropic, OpenAI, Google, OpenAI-compatible), each pointing at the Zen gateway. Replace three separate streaming generator functions with a single `streamText` call that picks the right provider based on `model.provider.npm`. Keep the existing SSE protocol to the frontend unchanged.

**Tech Stack:** ai@6, @ai-sdk/anthropic@3, @ai-sdk/openai@3, @ai-sdk/google@1, @ai-sdk/openai-compatible@2, Hono SSE streaming
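The dispatch described above reads only a few fields of each model record. As a reference, here is a minimal sketch of that shape, inferred from the code in Tasks 1 and 2 rather than from the actual catalog schema:

```ts
// Minimal model-record shape as read by this plan's code (inferred; the
// real catalog entries returned by OpenCode Zen may carry more fields).
interface ZenModel {
  id: string;                 // model identifier passed through to the provider
  provider: { npm: string };  // e.g. "@ai-sdk/anthropic"; selects the SDK package
  limit: { output: number };  // output-token ceiling, clamped to 4096 in streamModel
}
```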


## Task 1: Replace SDK imports and create provider instances

Files:

  - Modify: `backend/routes/api.ts:1-10` (imports)
  - Modify: `backend/routes/api.ts:68-90` (client factories)

### Step 1: Replace imports

Remove:

```ts
import Anthropic from "https://esm.sh/@anthropic-ai/sdk@0.39.0";
import OpenAI from "https://esm.sh/openai@4.96.0";
```

Add:

```ts
import { streamText } from "https://esm.sh/ai@6";
import { createAnthropic } from "https://esm.sh/@ai-sdk/anthropic@3";
import { createOpenAI } from "https://esm.sh/@ai-sdk/openai@3";
import { createGoogleGenerativeAI } from "https://esm.sh/@ai-sdk/google@1";
import { createOpenAICompatible } from "https://esm.sh/@ai-sdk/openai-compatible@2";
```

Note: use major-version-only pins on esm.sh; it resolves to the latest release within that major, which keeps these imports in line with the Tech Stack versions above.

### Step 2: Replace client factories

Remove `getAnthropicClient()`, `getOpenAIClient()`, and their cached variables (lines 68-90).

Add a single `getProviderModel` function:

```ts
// Resolve a model record from the catalog to a Vercel AI SDK language model,
// dispatching on the provider package recommended by OpenCode Zen.
function getProviderModel(model: any) {
  const apiKey = Deno.env.get("OPENCODE_API_KEY") || "";
  const npm = model.provider?.npm || "";
  if (npm === "@ai-sdk/anthropic") {
    const provider = createAnthropic({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  if (npm === "@ai-sdk/openai") {
    const provider = createOpenAI({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  if (npm === "@ai-sdk/google") {
    const provider = createGoogleGenerativeAI({
      baseURL: "https://opencode.ai/zen/v1",
      apiKey,
    });
    return provider(model.id);
  }
  // Default: openai-compatible (GLM, Qwen, Kimi, MiniMax, etc.).
  // The provider appends /chat/completions itself, so baseURL stops at /v1.
  const provider = createOpenAICompatible({
    name: "opencode",
    baseURL: "https://opencode.ai/zen/v1",
    apiKey,
  });
  return provider.chatModel(model.id);
}
```
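For illustration, a hypothetical catalog entry flows through like this (the field values are made up; only the shape matters):

```ts
// Hypothetical model record; values are illustrative only.
const model = {
  id: "claude-sonnet-4-5",
  provider: { npm: "@ai-sdk/anthropic" },
  limit: { output: 8192 },
};

// Yields an Anthropic language model bound to the Zen gateway,
// ready to hand to streamText({ model: ... }) in Task 2.
const languageModel = getProviderModel(model);
```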

### Step 3: Commit

```sh
git add backend/routes/api.ts
git commit -m "refactor: replace raw SDKs with Vercel AI SDK provider instances"
```

## Task 2: Replace streaming functions with unified streamText

Files:

  - Modify: `backend/routes/api.ts:92-208` (streaming functions)

### Step 1: Remove old streaming functions

Delete `streamAnthropic`, `streamOpenAIResponses`, `streamChatCompletions`, and `streamModel` (lines 92-208).

### Step 2: Write the new unified `streamModel`

```ts
// Stream one model's completion, emitting the same StreamEvent protocol
// the frontend already consumes over SSE.
async function* streamModel(
  model: any,
  messages: { role: string; content: string }[],
): AsyncGenerator<StreamEvent> {
  try {
    const result = streamText({
      model: getProviderModel(model),
      messages: messages.map((m) => ({
        role: m.role as "user" | "assistant" | "system",
        content: m.content,
      })),
      // AI SDK 5+ renamed maxTokens to maxOutputTokens.
      maxOutputTokens: Math.min(4096, model.limit.output),
    });
    let fullContent = "";
    for await (const chunk of result.textStream) {
      fullContent += chunk;
      yield { type: "chunk", data: { model_id: model.id, content: chunk } };
    }
    yield {
      type: "done",
      data: { model_id: model.id, full_content: fullContent },
    };
  } catch (err: any) {
    console.error(`[streamModel] ${model.id} error:`, err);
    yield { type: "error", data: { model_id: model.id, error: err.message } };
  }
}
```

This preserves the exact same `StreamEvent` interface and SSE protocol, so no frontend changes are needed.
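The plan doesn't show the route handler itself; for context, here is a rough sketch of how the generator could plug into Hono's SSE helper. The route path and request body shape are assumptions, not the app's actual API:

```ts
import { Hono } from "https://esm.sh/hono@4";
import { streamSSE } from "https://esm.sh/hono@4/streaming";

const app = new Hono();

// Hypothetical route; the real app's path and payload may differ.
app.post("/api/stream", (c) =>
  streamSSE(c, async (stream) => {
    const { model, messages } = await c.req.json();
    // Forward each StreamEvent as one SSE message.
    for await (const event of streamModel(model, messages)) {
      await stream.writeSSE({
        event: event.type,
        data: JSON.stringify(event.data),
      });
    }
  })
);
```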

### Step 3: Commit

```sh
git add backend/routes/api.ts
git commit -m "refactor: replace 3 streaming functions with unified streamText"
```

## Task 3: Verify end-to-end

### Step 1: Push to Val Town and test

Open the app and create a conversation, selecting models from different providers:

  - A Claude model (Anthropic provider)
  - A GPT model (OpenAI provider)
  - A GLM or Kimi model (openai-compatible provider)

Send a message and verify the following (a fetch sketch for inspecting the raw stream follows this list):

  - All models stream text chunks in real time
  - SSE events parse correctly (no frontend errors in the console)
  - Final messages are saved to the database
  - Conversation history works on follow-up messages
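One way to inspect the raw stream is with a plain `fetch`, assuming the hypothetical `/api/stream` route sketched after Task 2 (adjust path and payload to the real app):

```ts
// Read the SSE response as plain text and log frames as they arrive.
const res = await fetch("/api/stream", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: {
      id: "claude-sonnet-4-5", // illustrative model record, as in Task 1
      provider: { npm: "@ai-sdk/anthropic" },
      limit: { output: 8192 },
    },
    messages: [{ role: "user", content: "hello" }],
  }),
});

const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  console.log(value); // e.g. 'event: chunk\ndata: {"model_id":...}\n\n'
}
```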

### Step 2: Check the Google provider path

If Gemini models are in your model list, test one to verify the Google provider path works.

### Step 3: Commit any fixes
