multiplayer-prompting
┌─────────────────────────────────────────────────────────────┐
│ Val Town │
│ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Frontend (React) │ │
│ │ ┌─────────┐ ┌──────────┐ ┌────────────┐ │ │
│ │ │ CardDeck│ │PromptStack│ │VibeOutput │ │ │
│ │ └────┬────┘ └─────┬────┘ └──────┬─────┘ │ │
│ │ │ │ │ │ │
│ │ └─────────────┴───────────────┘ │ │
│ │ │ │ │
│ │ ┌──────▼──────┐ │ │
│ │ │ App.tsx │ │ │
│ │ └──────┬──────┘ │ │
│ └─────────────────────┼──────────────────────────────┘ │
│ │ │
│ SSE │ HTTP │
│ │ │
│ ┌─────────────────────▼──────────────────────────────┐ │
│ │ Backend (Hono) │ │
│ │ │ │
│ │ ┌─────────┐ ┌──────────┐ ┌───────────────┐ │ │
│ │ │SSE Hub │ │ API Routes│ │ Vibe Engine │ │ │
│ │ │broadcast│ │ /api/* │ │ (OpenAI) │ │ │
│ │ └────┬────┘ └─────┬─────┘ └───────┬───────┘ │ │
│ │ │ │ │ │ │
│ │ └─────────────┴─────────────────┘ │ │
│ │ │ │ │
│ │ ┌──────▼────────┐ │ │
│ │ │ Database │ │ │
│ │ │ (SQLite) │ │ │
│ │ └────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Browser → POST /api/session/:id/join
↓
Create/Find Session in SQLite
↓
Add Player with Turn Order
↓
Broadcast "player-joined" via SSE
↓
Return Session + Player Data
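A minimal sketch of what the join route might look like, assuming a Hono app, Val Town's std SQLite client, and a `broadcast` helper exported by the SSE hub (helper names, default values, and the exact SQL are illustrative, not the actual implementation):

```ts
import { Hono } from "npm:hono";
import { sqlite } from "https://esm.town/v/std/sqlite";
import { broadcast } from "./sseHub.ts"; // hypothetical SSE hub module (see the SSE sketch below)

const app = new Hono();

// POST /api/session/:id/join — find or create the session, register the player,
// notify everyone over SSE, and return the joined state to the caller.
app.post("/api/session/:id/join", async (c) => {
  const sessionId = c.req.param("id");
  const { name, avatar } = await c.req.json();

  // Create the session row if it does not exist yet (defaults here are assumptions).
  await sqlite.execute({
    sql: `INSERT INTO sessions_v1 (id, created_at, current_turn, max_players, status)
          VALUES (?, ?, 0, 8, 'active')
          ON CONFLICT (id) DO NOTHING`,
    args: [sessionId, Date.now()],
  });

  // The new player takes the next free turn_order slot.
  const count = await sqlite.execute({
    sql: `SELECT COUNT(*) FROM players_v1 WHERE session_id = ?`,
    args: [sessionId],
  });
  const turnOrder = Number(count.rows[0][0]);

  await sqlite.execute({
    sql: `INSERT INTO players_v1 (session_id, name, avatar, is_active, turn_order, last_seen)
          VALUES (?, ?, ?, 1, ?, ?)`,
    args: [sessionId, name, avatar, turnOrder, Date.now()],
  });

  // Let every connected client know a player joined.
  broadcast(sessionId, { type: "player-joined", name, turnOrder });

  // Return session + player data to the joining browser.
  return c.json({ sessionId, player: { name, avatar, turnOrder } });
});
```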
Browser → POST /api/session/:id/play
↓
Insert Card into prompt_stack Table
↓
Update current_turn in Session
↓
Broadcast "card-played" via SSE
↓
All Clients Update UI
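The play route follows the same shape. A sketch continuing the join sketch above (same imports, `app`, `sqlite`, and `broadcast`):

```ts
// POST /api/session/:id/play — push a card onto the shared stack and advance the turn.
app.post("/api/session/:id/play", async (c) => {
  const sessionId = c.req.param("id");
  const { playerName, cardType, content, imageUrl } = await c.req.json();

  // Record the card in the prompt stack.
  await sqlite.execute({
    sql: `INSERT INTO prompt_stack_v1 (session_id, player_name, card_type, content, image_url, timestamp)
          VALUES (?, ?, ?, ?, ?, ?)`,
    args: [sessionId, playerName, cardType, content, imageUrl ?? null, Date.now()],
  });

  // Advance current_turn, wrapping around the number of active players.
  await sqlite.execute({
    sql: `UPDATE sessions_v1
          SET current_turn = (current_turn + 1) %
              (SELECT COUNT(*) FROM players_v1 WHERE session_id = ? AND is_active = 1)
          WHERE id = ?`,
    args: [sessionId, sessionId],
  });

  // All clients update their stack and turn indicator on this event.
  broadcast(sessionId, { type: "card-played", playerName, cardType, content });
  return c.json({ ok: true });
});
```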
Browser → POST /api/session/:id/generate
↓
Fetch All Cards from prompt_stack
↓
Vibe Engine Synthesizes Prompt
↓
Call OpenAI API
↓
Save Result to vibe_results Table
↓
Broadcast "vibe-update" via SSE
↓
All Clients Show Response
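A sketch of the generate route under the same assumptions, using Val Town's std OpenAI wrapper; `buildPrompt` is the illustrative synthesis helper sketched in the Vibe Engine section further down:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// POST /api/session/:id/generate — continuing the sketches above (same app/sqlite/broadcast).
app.post("/api/session/:id/generate", async (c) => {
  const sessionId = c.req.param("id");
  const { query } = await c.req.json();

  // 1. Fetch the full prompt stack for this session.
  const result = await sqlite.execute({
    sql: `SELECT card_type, content FROM prompt_stack_v1 WHERE session_id = ? ORDER BY timestamp`,
    args: [sessionId],
  });
  const cards = result.rows.map((r: any) => ({ type: String(r[0]), content: String(r[1]) }));

  // 2. Synthesize the system prompt from the cards (see the Vibe Engine sketch below).
  const prompt = buildPrompt(cards);

  // 3. Call OpenAI.
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: prompt },
      { role: "user", content: query },
    ],
  });
  const response = completion.choices[0].message.content ?? "";

  // 4. Save the result, then 5. broadcast it so every client shows the response.
  await sqlite.execute({
    sql: `INSERT INTO vibe_results_v1 (session_id, prompt, response, token_count, timestamp)
          VALUES (?, ?, ?, ?, ?)`,
    args: [sessionId, prompt, response, completion.usage?.total_tokens ?? 0, Date.now()],
  });
  broadcast(sessionId, { type: "vibe-update", prompt, response });

  return c.json({ prompt, response });
});
```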
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Client 1 │ │ Client 2 │ │ Client 3 │
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
│ EventSource │ EventSource │ EventSource
├────────────────┼────────────────┤
│ │ │
└────────────────┴────────────────┘
│
┌───────▼────────┐
│ SSE Hub │
│ (Maintains │
│ connections) │
└───────┬────────┘
│
broadcast(message)
│
┌────────────────┴────────────────┐
│ │
▼ ▼
┌─────────────┐ ┌─────────────┐
│ card-played│ │ turn-changed│
│ vibe-update │ │player-joined│
│ player-left │ │ etc. │
└─────────────┘ └─────────────┘
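One plausible shape for the hub is an in-memory set of open streams keyed by session, fanned out on each broadcast. A sketch under that assumption (route path and helper names are illustrative; it reuses the Hono `app` from the earlier sketches):

```ts
// Minimal in-memory SSE hub sketch: track open streams, fan messages out per session.
type SSEClient = { sessionId: string; send: (data: string) => void };
const clients = new Set<SSEClient>();

// Each browser opens an EventSource against a route like this one.
app.get("/api/session/:id/events", (c) => {
  const sessionId = c.req.param("id");
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    start(controller) {
      const client: SSEClient = {
        sessionId,
        send: (data) => controller.enqueue(encoder.encode(`data: ${data}\n\n`)),
      };
      clients.add(client);

      // Heartbeat roughly every 30s keeps idle connections from being closed.
      const heartbeat = setInterval(() => client.send(JSON.stringify({ type: "ping" })), 30_000);

      // Clean up when the browser disconnects.
      c.req.raw.signal.addEventListener("abort", () => {
        clearInterval(heartbeat);
        clients.delete(client);
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
    },
  });
});

// broadcast(message): send one event to every client in the session.
export function broadcast(sessionId: string, message: unknown) {
  const data = JSON.stringify(message);
  for (const client of clients) {
    if (client.sessionId === sessionId) client.send(data);
  }
}
```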
sessions_v1
├── id (PK)
├── created_at
├── current_turn
├── max_players
└── status
players_v1
├── id (PK)
├── session_id (FK)
├── name
├── avatar
├── is_active
├── turn_order
└── last_seen
prompt_stack_v1
├── id (PK)
├── session_id (FK)
├── player_name
├── card_type
├── content
├── image_url
└── timestamp
vibe_results_v1
├── id (PK)
├── session_id (FK)
├── prompt
├── response
├── token_count
└── timestamp
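The diagrams above translate to table definitions along these lines, assuming Val Town's std SQLite client; column types and defaults are assumptions wherever the diagram does not specify them:

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// One-time setup implied by the schema diagram above.
await sqlite.batch([
  `CREATE TABLE IF NOT EXISTS sessions_v1 (
     id TEXT PRIMARY KEY,
     created_at INTEGER,
     current_turn INTEGER DEFAULT 0,
     max_players INTEGER DEFAULT 8,
     status TEXT DEFAULT 'active'
   )`,
  `CREATE TABLE IF NOT EXISTS players_v1 (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     session_id TEXT REFERENCES sessions_v1(id),
     name TEXT,
     avatar TEXT,
     is_active INTEGER DEFAULT 1,
     turn_order INTEGER,
     last_seen INTEGER
   )`,
  `CREATE TABLE IF NOT EXISTS prompt_stack_v1 (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     session_id TEXT REFERENCES sessions_v1(id),
     player_name TEXT,
     card_type TEXT,
     content TEXT,
     image_url TEXT,
     timestamp INTEGER
   )`,
  `CREATE TABLE IF NOT EXISTS vibe_results_v1 (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     session_id TEXT REFERENCES sessions_v1(id),
     prompt TEXT,
     response TEXT,
     token_count INTEGER,
     timestamp INTEGER
   )`,
]);
```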
Session: current_turn = 0
Players: [Alice, Bob, Charlie] (turn_order: 0, 1, 2)
Turn 0: Alice plays → current_turn = 1
Turn 1: Bob plays → current_turn = 2
Turn 2: Charlie plays → current_turn = 0 (wraps around)
Turn 0: Alice again → ...
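The wrap-around is plain modulo arithmetic over the active player count; a tiny illustrative helper:

```ts
// Hypothetical helper: advance the turn, wrapping over the number of active players.
function nextTurn(currentTurn: number, activePlayerCount: number): number {
  return (currentTurn + 1) % activePlayerCount;
}

// With 3 players: 0 → 1 → 2 → 0 → ...
nextTurn(0, 3); // 1 (Alice played, Bob is up)
nextTurn(2, 3); // 0 (Charlie played, back to Alice)
```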
Stack: [
{type: "role", content: "Act as Rust expert"},
{type: "tone", content: "ELI5"},
{type: "token-limit", content: "50 tokens"}
]
Vibe Engine:
1. Extract role → "Act as Rust expert"
2. Extract tone → "ELI5"
3. Extract token limit → 50
4. Build prompt:
"""
Act as Rust expert
ELI5
IMPORTANT: Keep your response under 50 tokens.
"""
5. Call OpenAI with prompt + user query
6. Return response
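A sketch of that synthesis step; `buildPrompt` is an illustrative name, and the card types match the example above:

```ts
type PlayedCard = { type: string; content: string };

// Fold the played cards into a single system prompt, mirroring steps 1-4 above.
function buildPrompt(stack: PlayedCard[]): string {
  const role = stack.find((c) => c.type === "role")?.content;              // "Act as Rust expert"
  const tone = stack.find((c) => c.type === "tone")?.content;              // "ELI5"
  const limitCard = stack.find((c) => c.type === "token-limit")?.content;  // "50 tokens"
  const limit = limitCard ? parseInt(limitCard, 10) : undefined;

  return [
    role,
    tone,
    limit ? `IMPORTANT: Keep your response under ${limit} tokens.` : undefined,
  ].filter(Boolean).join("\n");
}
```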
App
├── PlayerList
│ └── Player Cards (active + spectators)
├── PromptStack
│ └── Played Card History
├── VibeOutput
│ ├── Query Input
│ ├── Generate Button
│ └── AI Response Display
└── CardDeck
├── Card Grid (5 cards)
└── Custom Card Modal
App Component State:
├── sessionId: string
├── playerName: string
├── session: Session
├── players: Player[]
├── stack: PlayedCard[]
├── hand: Card[]
├── currentPlayer: Player
├── isMyTurn: boolean
├── latestResult: PromptResult
└── generating: boolean
Effects:
├── SSE Connection (updates on message)
└── fetchSessionState (on join/updates)
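A sketch of those two effects as a hook, assuming a `/state` endpoint and the `/events` SSE route from the backend sketches (both route names and the shape of the state payload are assumptions):

```ts
import { useEffect, useState } from "https://esm.sh/react@18.2.0";

// Illustrative hook covering the two effects above: one EventSource per session,
// plus a full state refetch whenever any broadcast arrives.
function useSession(sessionId: string) {
  const [session, setSession] = useState<any>(null);
  const [players, setPlayers] = useState<any[]>([]);
  const [stack, setStack] = useState<any[]>([]);

  async function fetchSessionState() {
    const res = await fetch(`/api/session/${sessionId}/state`);
    const data = await res.json();
    setSession(data.session);
    setPlayers(data.players);
    setStack(data.stack);
  }

  useEffect(() => {
    fetchSessionState();
    // card-played, turn-changed, player-joined, vibe-update, ... all trigger a refetch.
    const source = new EventSource(`/api/session/${sessionId}/events`);
    source.onmessage = () => fetchSessionState();
    return () => source.close();
  }, [sessionId]);

  return { session, players, stack };
}
```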
- No Authentication: Sessions are public by ID
- Input Validation: Sanitize all user inputs (see the sketch after this list)
- Rate Limiting: Consider adding to /generate endpoint
- XSS Protection: React escapes rendered strings by default
- CSRF: Not applicable (no cookies/sessions)
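For the input-validation point, a hypothetical guard applied before user-supplied card content is stored or forwarded to the model (name and limits are illustrative):

```ts
// Hypothetical input guard for user-supplied card content.
function sanitizeCardContent(raw: unknown): string {
  if (typeof raw !== "string") throw new Error("content must be a string");
  const trimmed = raw.trim().slice(0, 500); // cap length to keep prompts bounded
  if (trimmed.length === 0) throw new Error("content must not be empty");
  return trimmed;
}
```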
- Cold Start: ~2-3 seconds for database init
- SSE Overhead: ~1KB per client per 30s (heartbeat)
- Database: SQLite in-memory for active sessions
- OpenAI: 2-5 seconds per generation (gpt-4o-mini)
- Concurrent Players: Tested with 10 simultaneous users
- Session Cleanup: Cron job to archive old sessions
- Connection Pooling: Val Town handles automatically
- CDN for Static Assets: Val Town edge caching
- Database Sharding: Separate tables per date range
- Rate Limiting: Add Redis-like counter for /generate
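The /generate rate limit could start as a simple in-memory counter before anything Redis-like is needed; a sketch with illustrative limits:

```ts
// Naive per-session sliding-window limiter for /generate (in-memory, resets on redeploy).
const recentCalls = new Map<string, number[]>();

function allowGenerate(sessionId: string, limit = 5, windowMs = 60_000): boolean {
  const now = Date.now();
  const calls = (recentCalls.get(sessionId) ?? []).filter((t) => now - t < windowMs);
  if (calls.length >= limit) return false;
  calls.push(now);
  recentCalls.set(sessionId, calls);
  return true;
}

// In the route: if (!allowGenerate(sessionId)) return c.json({ error: "rate limited" }, 429);
```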
Legend:
- → : Data flow
- ↓ : Sequential step
- ├── : Component/structure hierarchy
- (FK) : Foreign key
- (PK) : Primary key