PromptCompare Design

Overview

A Val Town app for comparing LLM model outputs side-by-side. Users enter a prompt, select 1-4 models from the OpenCode Zen catalog, and see responses stream in parallel columns. Supports multi-turn conversations where each model maintains its own conversation history.

Architecture

Backend (Hono on Val Town)

  • backend/index.ts — Hono app entry point, serves frontend, mounts route modules
  • backend/routes/auth.ts — Password auth via PROMPTCOMPARE_PASSWORD env var, sets HttpOnly session cookie
  • backend/routes/api.ts — REST endpoints for conversations + streaming chat proxy to OpenCode Zen
  • backend/database/migrations.ts — SQLite schema (val-scoped via std/sqlite/main.ts)
  • backend/database/queries.ts — Typed query functions

Frontend (React 18 + Tailwind, static files)

  • frontend/index.html — Shell with CDN imports (React 18.2.0 via esm.sh, Tailwind via twind)
  • frontend/index.tsx — App entry, routing (login vs main)
  • frontend/components/ — Chat UI components inspired by ai-elements patterns

Shared

  • shared/types.ts — TypeScript interfaces shared between frontend and backend

Data Model

conversations (
  id TEXT PRIMARY KEY,
  title TEXT,
  model_ids TEXT NOT NULL,            -- JSON array of model IDs
  created_at TEXT DEFAULT (datetime('now')),
  updated_at TEXT DEFAULT (datetime('now'))
)

messages (
  id TEXT PRIMARY KEY,
  conversation_id TEXT NOT NULL,
  role TEXT NOT NULL,                 -- 'user'
  content TEXT NOT NULL,
  created_at TEXT DEFAULT (datetime('now')),
  FOREIGN KEY (conversation_id) REFERENCES conversations(id)
)

responses (
  id TEXT PRIMARY KEY,
  message_id TEXT NOT NULL,
  model_id TEXT NOT NULL,
  content TEXT NOT NULL DEFAULT '',
  created_at TEXT DEFAULT (datetime('now')),
  FOREIGN KEY (message_id) REFERENCES messages(id)
)
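
Multi-turn context can be reassembled from these tables by interleaving the user messages with one model's stored responses. A minimal TypeScript sketch; the row shapes mirror the schema above, but the helper name and exact signature are illustrative, not taken from `queries.ts`:

```typescript
// Illustrative row shapes mirroring the schema (hypothetical helper,
// not the actual queries.ts implementation).
interface MessageRow { id: string; content: string; }
interface ResponseRow { message_id: string; model_id: string; content: string; }

// Rebuild one model's chat history: each user message followed by that
// model's stored response, in creation order.
function historyFor(
  modelId: string,
  messages: MessageRow[],
  responses: ResponseRow[],
): { role: "user" | "assistant"; content: string }[] {
  const byMessage = new Map(
    responses.filter((r) => r.model_id === modelId).map((r) => [r.message_id, r]),
  );
  const history: { role: "user" | "assistant"; content: string }[] = [];
  for (const m of messages) {
    history.push({ role: "user", content: m.content });
    const r = byMessage.get(m.id);
    if (r) history.push({ role: "assistant", content: r.content });
  }
  return history;
}
```

Because responses are keyed by both `message_id` and `model_id`, each model's history stays isolated even though all models share the same user turns.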

API Endpoints

  • POST /api/auth — Login with password, returns session cookie
  • GET /api/auth/check — Check if session is valid
  • POST /api/auth/logout — Clear session cookie
  • GET /api/conversations — List conversations
  • GET /api/conversations/:id — Get conversation with messages and responses
  • POST /api/conversations — Create new conversation (title, model_ids)
  • DELETE /api/conversations/:id — Delete conversation
  • POST /api/chat — Send a message, streams responses from selected models via SSE
  • GET /api/models — Return non-deprecated models from models.json
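
The `/api/models` handler reduces to a simple filter over the catalog. A sketch, assuming each catalog entry carries an optional `status` field as described above:

```typescript
// Minimal model-catalog entry; field names assumed from the design notes.
interface ModelEntry {
  id: string;
  name: string;
  status?: string; // e.g. "deprecated"
}

// Keep everything not explicitly marked deprecated.
function activeModels(catalog: ModelEntry[]): ModelEntry[] {
  return catalog.filter((m) => m.status !== "deprecated");
}
```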

Streaming Strategy

When a user sends a message:

  1. Backend creates a message row and a response row per model
  2. Backend opens parallel requests to OpenCode Zen for each model
  3. Backend multiplexes responses into a single SSE stream, tagging each chunk with its model_id
  4. Frontend demuxes the SSE stream and updates each column independently
  5. When all models finish, backend updates the response rows with final content
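
Step 3, the multiplexer, can be sketched as a merge of async iterables that yields chunks tagged with their `model_id` in arrival order. This is an illustrative sketch, not the actual backend code:

```typescript
interface TaggedChunk {
  model_id: string;
  content: string;
}

// Merge several per-model token streams into one, tagging each chunk with
// the model it came from. Chunks are yielded in arrival order; per-model
// order is preserved because each stream is only advanced after its
// previous chunk is yielded.
async function* multiplex(
  streams: Record<string, AsyncIterable<string>>,
): AsyncGenerator<TaggedChunk> {
  type Entry = { model_id: string; it: AsyncIterator<string> };
  const advance = (e: Entry) => e.it.next().then((r) => ({ e, r }));
  const pending = new Map<Entry, Promise<{ e: Entry; r: IteratorResult<string> }>>();
  for (const [model_id, s] of Object.entries(streams)) {
    const e = { model_id, it: s[Symbol.asyncIterator]() };
    pending.set(e, advance(e));
  }
  while (pending.size > 0) {
    // Race all in-flight reads; whichever stream produces first wins.
    const { e, r } = await Promise.race(pending.values());
    if (r.done) {
      pending.delete(e);
    } else {
      pending.set(e, advance(e));
      yield { model_id: e.model_id, content: r.value };
    }
  }
}
```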

SSE event format:

event: chunk
data: {"model_id": "claude-sonnet-4-6", "content": "Hello"}

event: done
data: {"model_id": "claude-sonnet-4-6"}

event: error
data: {"model_id": "claude-sonnet-4-6", "error": "..."}
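
On the client, each frame demultiplexes into an event name plus a tagged payload. A browser `EventSource` (or a manual `fetch` reader) handles the wire parsing, but a small sketch makes the format above concrete; nothing here is assumed beyond that format:

```typescript
interface SSEEvent {
  event: string;
  data: { model_id: string; content?: string; error?: string };
}

// Parse one "event: ...\ndata: ..." frame as emitted by the backend.
function parseFrame(frame: string): SSEEvent {
  let event = "message";
  const dataLines: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  return { event, data: JSON.parse(dataLines.join("\n")) };
}
```

The frontend then routes each parsed event to the column matching `data.model_id`.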

Auth

  • Env var: PROMPTCOMPARE_PASSWORD
  • POST /api/auth with { password } body
  • Backend validates, sets HttpOnly cookie with a signed token (or simple hash)
  • All /api/* routes (except auth) check cookie via middleware
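
One way to realize the "signed token" option is an HMAC over an issued-at timestamp, keyed by the password (or a derived secret). A sketch using `node:crypto`; the token shape is an assumption, not the app's actual scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";
import { Buffer } from "node:buffer";

// Token = "<issued-at-ms>.<hex HMAC of issued-at>" (illustrative scheme).
function signToken(secret: string, issuedAt = Date.now()): string {
  const sig = createHmac("sha256", secret).update(String(issuedAt)).digest("hex");
  return `${issuedAt}.${sig}`;
}

// Constant-time verification; a real middleware would also compare
// issuedAt against a maximum session age.
function verifyToken(token: string, secret: string): boolean {
  const [issuedAt, sig] = token.split(".");
  if (!issuedAt || !sig) return false;
  const expected = createHmac("sha256", secret).update(issuedAt).digest("hex");
  return (
    sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
  );
}
```

The middleware would read the cookie, call `verifyToken`, and return 401 before any route handler runs.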

UI Layout

+--sidebar---+-------------------main--------------------+
| [New Chat] | Model A        Model B        Model C     |
|            | +-response--+  +-response--+  +--------+  |
| Conv 1     | | streaming |  | streaming |  | ...    |  |
| Conv 2     | | markdown  |  | markdown  |  |        |  |
| Conv 3     | +-----------+  +-----------+  +--------+  |
|            |                                           |
|            | +--prompt input bar---------------------+ |
|            | | Type a message...              [Send] | |
|            | +---------------------------------------+ |
+------------+-------------------------------------------+

Model Selection

  • On new conversation, show a model picker (checkboxes, max 4)
  • Models sourced from models.json, filtered to exclude status: "deprecated"
  • Show model name, family, cost per 1M tokens
  • Selected models persist for the conversation
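
The picker's checkbox state machine is small; a sketch of the max-4 toggle (the helper name is illustrative):

```typescript
// Toggle a model in the picker: deselect if present, add if under the cap,
// otherwise ignore the click.
function toggleModel(selected: string[], id: string, max = 4): string[] {
  if (selected.includes(id)) return selected.filter((m) => m !== id);
  return selected.length < max ? [...selected, id] : selected;
}
```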

Key Decisions

  • Single SSE stream (multiplexed) rather than multiple parallel connections — simpler client code
  • Each model maintains its own response history for multi-turn context
  • Val-scoped SQLite (std/sqlite/main.ts) for persistence
  • No build step — all frontend served as raw files via Val Town utilities