paulkinlan / eval

# LLM Deck

Compare AI model responses side-by-side in real-time.

## Overview

LLM Deck is a web application that lets you send the same prompt to multiple AI models simultaneously and compare their responses in real time. It is useful for evaluating model capabilities, testing prompts, or exploring different AI behaviors.

## Features

- **Multi-Model Comparison**: Compare responses from OpenAI, Anthropic, and Google models side by side
- **Real-Time Streaming**: Watch responses stream in as they're generated
- **Configurable Parameters**: Adjust temperature and max tokens per model
- **Markdown Rendering**: Responses are rendered with full Markdown support
- **Modern UI**: Dark theme with a glassmorphism design
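
The multi-model streaming above amounts to fanning one prompt out to several providers concurrently and collecting each stream independently. A minimal sketch, with `fakeStream` standing in for a real provider stream (the app itself uses the Vercel AI SDK for this):

```typescript
// Sketch of concurrent fan-out. `fakeStream` is purely illustrative and
// stands in for a real streaming provider call.
async function* fakeStream(model: string, prompt: string): AsyncGenerator<string> {
  for (const word of `${model}: echo of "${prompt}"`.split(" ")) {
    yield word + " ";
  }
}

// Drain one stream into a single string.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let out = "";
  for await (const chunk of stream) out += chunk;
  return out;
}

// Each column streams independently; Promise.all waits for all of them.
export async function fanOut(models: string[], prompt: string): Promise<string[]> {
  return Promise.all(models.map((m) => collect(fakeStream(m, prompt))));
}
```

In the real UI each column renders its chunks as they arrive rather than waiting for `Promise.all`, which is what makes the comparison feel live.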

## Supported Models

### OpenAI

- GPT-5.2
- o3 (Reasoning)
- o4-mini (Reasoning)
- GPT-5 Mini

### Anthropic

- Claude 4.5 Opus
- Claude 4.5 Sonnet
- Claude 4.5 Haiku

### Google

- Gemini 3 Pro
- Gemini 3 Flash
- Gemini 2.5 Pro
- Gemini 2.5 Flash

## Setup

### Environment Variables

Set the following environment variables with your API keys:

| Key | Description |
| --- | --- |
| `OPENAI_API_KEY` | Your OpenAI API key |
| `ANTHROPIC_API_KEY` | Your Anthropic API key |
| `GOOGLE_GENERATIVE_AI_API_KEY` | Your Google AI API key |

You only need to set keys for the providers you want to use.
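
"Only the providers you set keys for" can be resolved with a simple lookup from provider name to environment key. The mapping below matches the table above; the helper itself is an illustrative sketch, not the app's actual code:

```typescript
// Map each provider to the environment variable that enables it.
const KEY_FOR_PROVIDER: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_GENERATIVE_AI_API_KEY",
};

// Return only the providers whose API key is present in the given env.
export function enabledProviders(
  env: Record<string, string | undefined>,
): string[] {
  return Object.entries(KEY_FOR_PROVIDER)
    .filter(([, key]) => Boolean(env[key]))
    .map(([provider]) => provider);
}
```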

## Usage

1. Open the application in your browser
2. Add or remove model columns using the "Add Model" button
3. Configure each column with your desired provider, model, temperature, and max tokens
4. Enter your prompt in the input area at the bottom
5. Press `Ctrl+Enter` or click "Send" to send the prompt to all models
6. Watch the responses stream in side by side
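
The per-column settings from steps 2–3 can be pictured as one request payload per column. The payload shape and the fallback values below are assumptions for illustration, not the app's actual wire format:

```typescript
// Hypothetical per-column settings, mirroring the Usage steps above.
interface ColumnConfig {
  provider: "openai" | "anthropic" | "google";
  model: string;
  temperature?: number;
  maxTokens?: number;
}

// Build one payload per column from a shared prompt. The default
// temperature and maxTokens here are illustrative, not the app's values.
export function buildRequests(prompt: string, columns: ColumnConfig[]) {
  return columns.map((col) => ({
    provider: col.provider,
    model: col.model,
    prompt,
    temperature: col.temperature ?? 0.7,
    maxTokens: col.maxTokens ?? 1024,
  }));
}
```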

## Customization

The application is configurable via the `config` object in `main.ts`:

```ts
const config = {
  title: "LLM Deck",
  subtitle: "Compare AI Models",
  pageTitle: "LLM Deck — Compare AI Models",
  logo: {
    imageUrl: "", // Custom logo URL
    gradient: "from-indigo-500 to-purple-600",
    shadowColor: "indigo-500/20",
  },
  theme: {
    primaryGradient: "from-indigo-500 to-purple-600",
    primaryHoverGradient: "from-indigo-400 to-purple-500",
    accentColor: "#6366f1",
  },
  inputPlaceholder: "Enter your prompt to compare responses...",
  defaultColumns: [
    { provider: "openai", model: "gpt-5.2" },
    { provider: "anthropic", model: "claude-opus-4-5-20251101" },
    { provider: "google", model: "gemini-3-pro-preview" },
  ],
  footerText: "",
};
```
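
If you prefer to keep the defaults intact and overlay only the fields you change, a one-level-deep merge works for the nested `logo` and `theme` objects. `mergeConfig` below is an illustrative helper, not part of the app, which simply edits the `config` object in place:

```typescript
// A trimmed-down stand-in for the config shape above (illustrative).
interface DeckConfig {
  title: string;
  logo: { imageUrl: string; gradient: string };
  theme: { accentColor: string };
}

// Overlay partial user settings onto the defaults, merging the nested
// `logo` and `theme` objects one level deep so unset fields survive.
export function mergeConfig(
  base: DeckConfig,
  patch: Partial<DeckConfig>,
): DeckConfig {
  return {
    ...base,
    ...patch,
    logo: { ...base.logo, ...patch.logo },
    theme: { ...base.theme, ...patch.theme },
  };
}
```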

## Tech Stack

- **Framework**: Hono - Lightweight web framework
- **AI SDK**: Vercel AI SDK - Unified interface for AI providers
- **Styling**: Tailwind CSS - Utility-first CSS
- **Markdown**: Marked - Markdown parser and compiler

## License

MIT
