| title: | Introducing Val Town MCP |
|---|---|
| description: | Bring Val Town to your favorite LLM |
| pubDate: | 2025-11-14T00:00:00.000Z |
| author: | Steve Krouse |
On Val Town, you can deploy TypeScript apps in 100ms. Now with the Val Town MCP server, you can do that from Claude, ChatGPT, Cursor, VSCode, or wherever you do your AI coding. Try it!
If you've been following my tweets recently – "I've gotta rant about LLMs, MCP, and tool-calling for a second", "MCP is mostly nonsense", "MCP is overhyped" – you might be surprised by this announcement. Well, how did you think I got those salty takes except by building an MCP server?
Yes, I think MCP is dubious as a protocol. But for now, MCP is the right way for Val Town to meet developers where they are. In Cursor or Claude Code or Zed or wherever. Watch us make a CRUD blog app with Claude Code in a couple of prompts.
This is:
- Cheaper: Don't pay us for credits. Pay the inference provider directly.
- Better: Use whatever state-of-the-art LLM you want. We at Val Town don't have to build it.
- You get the best parts of Val Town – instant deployments, built-in SQLite, etc. – in your favorite LLM coding tool.
We have guides for some of the popular LLMs—Claude Code, ChatGPT, Claude Web/Desktop—but it should work with any MCP client. If you'd like a hand with setup, ask in our Discord server or send us an email.
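If your tool of choice doesn't have a guide yet, any MCP client should work. As a rough sketch, here's what connecting from the official MCP TypeScript SDK might look like, assuming an HTTP transport; the server URL below is a placeholder, so check the guides for the real endpoint and authentication details.

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: see the setup guides for the real Val Town MCP URL and auth.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example-valtown-mcp.example.com/mcp"),
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Discover the tools the server exposes and print their names.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```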
MCP is not perfect (again, see tweets), and the whiplash from AI tooling is real, but MCP has a few things going for it:
- Meet developers where they are
- Write once, integrate everywhere
- Ship faster
Instead of waiting for us to fast-follow all the best AI coding assistants with Townie, you can use Val Town MCP with the latest and greatest LLM of your choosing.
MCP also allows us to ship faster: whereas traditional APIs require careful versioning to avoid breaking changes, an MCP spec can change continuously because LLMs read the spec and run inference at runtime.
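To make that concrete: an MCP server ships each tool's name, description, and input schema to the client, and the LLM reads them fresh at runtime. Here's a rough sketch using the official MCP TypeScript SDK; the tool name and fields are illustrative, not one of Val Town's actual tools. Rewording a description or adding an optional parameter doesn't break existing clients the way renaming an API field would.

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "example-server", version: "1.0.0" });

// Hypothetical tool for illustration only. The name, description, and input
// schema are sent to the client, and the LLM interprets them at inference
// time, so the wording can evolve without a versioned API migration.
server.tool(
  "create_val",
  "Create a new val (a deployed TypeScript file) in the given project.",
  { project: z.string(), name: z.string(), code: z.string() },
  async ({ project, name, code }) => ({
    content: [{ type: "text", text: `Created ${project}/${name} (${code.length} bytes)` }],
  }),
);

await server.connect(new StdioServerTransport());
```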
There's a common thread running through every AI feature we've implemented (e.g., Townie, the vt CLI, and now MCP). On Val Town, whatever you and your LLM are coding is live immediately. It's the idea from Bret Victor's Inventing on Principle talk:
> Creators need an immediate connection to what they're creating... when you're making something, if you make a change, or you make a decision, you need to see the effect of that immediately.
When you (or your LLM) make an edit, your code is deployed on Val Town immediately. That goes for everything on Val Town: all code is deployed by default, either on prod or in a branch, not locally on your machine.
Val Town isn't an AI company, but this always-deployed model works quite well with LLMs. Just give your favorite LLM a branch, and the code it writes will be alive by default.
Bring Val Town MCP to your favorite LLM, and let us know what you think.