Model-agnostic MCP router for the Big Three: OpenAI, Claude, and Gemini.
One API, one UI — point any MCP server at any frontier model. Switch between GPT-5, Claude Opus 4, and Gemini 2.5 Pro without changing your tool integrations.
```
┌─────────────┐     ┌──────────────────────┐     ┌───────────────┐
│   Chat UI   │────▶│   POST /api/chat     │────▶│  MCP Servers  │
│  (browser)  │◀────│    Model Router      │◀────│  (any remote) │
└─────────────┘     └──────────────────────┘     └───────────────┘
                               │
                        ┌──────┼──────┐
                        ▼      ▼      ▼
                  ┌────────┐ ┌──────┐ ┌────────┐
                  │ OpenAI │ │Claude│ │ Gemini │
                  │Response│ │ Msg  │ │ GenAI  │
                  │  API   │ │ API  │ │  SDK   │
                  └────────┘ └──────┘ └────────┘
```
| Provider | API | MCP Strategy | How It Works |
|---|---|---|---|
| OpenAI | Responses API | Server-side | Pass `type: "mcp"` + server URL → OpenAI handles everything |
| Anthropic | Messages API | Client-side | Connect to MCP, discover tools, agentic loop (tool_use → callTool → tool_result) |
| Google | GenAI SDK | Client-side | Connect to MCP, discover tools, agentic loop (functionCall → callTool → functionResponse) |
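The OpenAI server-side path needs no orchestration loop: the MCP server URL rides along in the request as a tool entry. A minimal sketch, assuming the Responses API's `mcp` tool type; the helper name `buildMcpTool` is ours, not the project's:

```typescript
// Build the Responses API tool entry that tells OpenAI to connect to
// the MCP server itself (the "server-side" strategy in the table above).
function buildMcpTool(url: string, label: string) {
  return {
    type: "mcp" as const,
    server_label: label,
    server_url: url,
    require_approval: "never" as const, // skip per-call approval round-trips
  };
}

// Usage (requires OPENAI_API_KEY and the `openai` npm package):
// import OpenAI from "openai";
// const openai = new OpenAI();
// const res = await openai.responses.create({
//   model: "gpt-4.1-mini",
//   input: "What tools do you have?",
//   tools: [buildMcpTool("https://mcp.example.com/sse", "my-server")],
// });
// console.log(res.output_text);
```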
The router detects the provider from the model string:
- `gpt-*`, `o3*`, `o4*` → OpenAI
- `claude-*` → Anthropic
- `gemini-*` → Google

Or use explicit prefixes: `openai/gpt-4.1`, `anthropic/claude-sonnet-4-5`, `google/gemini-2.5-flash`
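The detection rule fits in a few lines. A sketch of the logic described above (function name and error handling are illustrative; the router's actual implementation may differ):

```typescript
type Provider = "openai" | "anthropic" | "google";

function detectProvider(model: string): Provider {
  // Explicit prefixes win: "openai/gpt-4.1" → openai
  const slash = model.indexOf("/");
  if (slash !== -1) {
    const prefix = model.slice(0, slash);
    if (prefix === "openai" || prefix === "anthropic" || prefix === "google") {
      return prefix;
    }
  }
  // Otherwise infer the provider from the model family
  if (/^(gpt-|o3|o4)/.test(model)) return "openai";
  if (model.startsWith("claude-")) return "anthropic";
  if (model.startsWith("gemini-")) return "google";
  throw new Error(`Unknown model: ${model}`);
}
```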
Add environment variables for the providers you want to use:
| Key | Provider | Get it from |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | platform.openai.com |
| `ANTHROPIC_API_KEY` | Anthropic | console.anthropic.com |
| `GOOGLE_API_KEY` | Google | aistudio.google.com |
You only need keys for providers you plan to use. The UI shows which are configured.
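The "which keys are configured" check amounts to probing the environment for the variables in the table above. A sketch — the function name is ours, and the real UI's check may differ:

```typescript
// Map each provider to the env var listed in the table above.
const KEY_VARS = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_API_KEY",
} as const;

// Return the providers whose API key is present and non-empty.
// Call as configuredProviders(process.env) in Node / Val Town.
function configuredProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(KEY_VARS)
    .filter(([, varName]) => Boolean(env[varName]?.trim()))
    .map(([provider]) => provider);
}
```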
In the sidebar, add any remote MCP server URL. Examples:
- https://www.val.town/mcp
- https://mcp.notion.com/mcp

Pick a model, type a message. The router handles the rest.
```json
{
  "model": "gpt-4.1-mini",
  "messages": [
    { "role": "user", "content": "What tools do you have?" }
  ],
  "mcpServers": [
    { "url": "https://mcp.example.com/sse", "label": "my-server" }
  ],
  "system": "You are a helpful assistant."
}
```
Response (same shape for all providers):
```json
{
  "content": "I have access to the following tools...",
  "toolCalls": [
    {
      "name": "list_tools",
      "serverLabel": "my-server",
      "arguments": "{}",
      "output": "..."
    }
  ],
  "provider": "openai",
  "model": "gpt-4.1-mini",
  "usage": { "inputTokens": 150, "outputTokens": 89 }
}
```
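For TypeScript clients, the unified shape can be captured as a type plus a runtime guard before trusting a parsed `fetch("/api/chat")` body. A sketch — the interface names are ours, not the project's:

```typescript
// Interfaces mirroring the unified response shape shown above.
interface ToolCall {
  name: string;
  serverLabel: string;
  arguments: string; // JSON-encoded argument object
  output: string;
}

interface ChatResponse {
  content: string;
  toolCalls: ToolCall[];
  provider: "openai" | "anthropic" | "google";
  model: string;
  usage: { inputTokens: number; outputTokens: number };
}

// Runtime guard: checks the fields every provider's response carries.
function isChatResponse(x: any): x is ChatResponse {
  return (
    typeof x?.content === "string" &&
    Array.isArray(x?.toolCalls) &&
    typeof x?.provider === "string" &&
    typeof x?.model === "string" &&
    typeof x?.usage?.inputTokens === "number" &&
    typeof x?.usage?.outputTokens === "number"
  );
}
```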
Returns available models and which API keys are configured.
- `gpt-5` — GPT-5 flagship
- `gpt-4.1` / `gpt-4.1-mini` / `gpt-4.1-nano` — GPT-4.1 family
- `o3` / `o4-mini` — Reasoning models
- `claude-opus-4-6` — Claude Opus 4
- `claude-sonnet-4-5-20250929` — Claude Sonnet 4.5
- `claude-haiku-3-5-20241022` — Claude Haiku 3.5
- `gemini-2.5-pro` — Gemini 2.5 Pro
- `gemini-2.5-flash` — Gemini 2.5 Flash
- `gemini-2.0-flash` — Gemini 2.0 Flash

OpenAI is the simplest: their Responses API handles MCP server-side. You pass a URL, they call the MCP server, execute tools, and return results. Zero orchestration code needed.
Anthropic and Google require client-side MCP orchestration. The router connects to MCP servers, discovers tools, converts schemas to each provider's format, then runs an agentic loop — sending messages, catching tool calls, executing them against MCP servers, and feeding results back until the model is done.
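The client-side loop for Anthropic can be sketched roughly as follows. This is a simplified illustration, not the router's actual code: clients are injected so the orchestration is visible, `any` stands in for the SDK types, and details of `@anthropic-ai/sdk` and `@modelcontextprotocol/sdk` may differ in places:

```typescript
// Minimal agentic loop against the Anthropic Messages API shape:
// send → if stop_reason === "tool_use", execute each tool_use block
// via MCP, append tool_result blocks, repeat until the model stops.
async function runAgenticLoop(
  anthropic: any,   // e.g. new Anthropic() from @anthropic-ai/sdk
  mcp: any,         // connected MCP client exposing callTool()
  tools: any[],     // tool schemas discovered from the MCP server
  model: string,
  messages: any[],
): Promise<string> {
  for (let turn = 0; turn < 10; turn++) { // hard cap on loop turns
    const msg = await anthropic.messages.create({
      model,
      max_tokens: 1024,
      messages,
      tools,
    });
    messages.push({ role: "assistant", content: msg.content });
    if (msg.stop_reason !== "tool_use") {
      // Done: return the concatenated text blocks.
      return msg.content
        .filter((b: any) => b.type === "text")
        .map((b: any) => b.text)
        .join("");
    }
    // Execute every tool_use block against the MCP server.
    const results: any[] = [];
    for (const block of msg.content) {
      if (block.type !== "tool_use") continue;
      const out = await mcp.callTool({ name: block.name, arguments: block.input });
      results.push({
        type: "tool_result",
        tool_use_id: block.id,
        content: JSON.stringify(out.content),
      });
    }
    messages.push({ role: "user", content: results });
  }
  throw new Error("agentic loop did not converge");
}
```

The Gemini loop is the same shape with `functionCall`/`functionResponse` parts in place of `tool_use`/`tool_result` blocks.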
- `openai` npm package — Responses API
- `@anthropic-ai/sdk` — Messages API
- `@google/genai` — GenAI SDK
- `@modelcontextprotocol/sdk` — Streamable HTTP client

Built on Val Town. March 2026.