Purpose: Replace the existing research search UI with a simple, guided Q&A flow that presents source texts in a range of registers (formal, business, casual, social, etc.) and captures how the user would naturally phrase the same thing. The collected answers form a small dataset that can be used to fine‑tune a LoRA on the user's casual style.
Goals
Present prompts from a JSON file (no model required to ask questions)
Capture answers via text input and optional voice dictation (Groq Whisper)
Keep the clean, minimal design of the current app (same fonts, spacing, Tailwind) and retain the existing API-key UX
Export answers as JSON for downstream fine‑tuning
Data Model
Prompts JSON: array of items, each with the following fields (see the type sketch after this list)
id: string
label: short descriptor
instruction: what to do (e.g., "Rewrite this in your style")
source_text: the wording to adapt
tags: string[] (optional)
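A minimal TypeScript sketch of this shape, assuming the field names above map one-to-one onto JSON keys; the sample values are invented for illustration.

```ts
// Shape of one entry in /data/questions.json; field names follow the list above.
interface PromptItem {
  id: string;
  label: string;       // short descriptor, e.g. "formal email"
  instruction: string; // what to do, e.g. "Rewrite this in your style"
  source_text: string; // the wording to adapt
  tags?: string[];     // optional
}

// Invented sample entry, for illustration only.
const samplePrompt: PromptItem = {
  id: "q-001",
  label: "formal email",
  instruction: "Rewrite this in your style",
  source_text: "We regret to inform you that your request cannot be accommodated at this time.",
  tags: ["formal", "email"],
};
```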
Answers (stored client-side; see the record sketch after this list):
question_id: string
answer_text: string
transcript_text?: string (if captured via Whisper)
created_at: ISO timestamp
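A matching TypeScript sketch of the client-side record; the `makeAnswer` helper is hypothetical.

```ts
// Shape of one collected answer; field names follow the list above.
interface AnswerRecord {
  question_id: string;
  answer_text: string;
  transcript_text?: string; // present only when captured via Whisper
  created_at: string;       // ISO 8601 timestamp
}

// Hypothetical constructor used when the user submits an answer.
function makeAnswer(questionId: string, answerText: string, transcriptText?: string): AnswerRecord {
  return {
    question_id: questionId,
    answer_text: answerText,
    transcript_text: transcriptText,
    created_at: new Date().toISOString(),
  };
}
```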
Frontend Flow
Load prompts from /data/questions.json (flow-state sketch after this list)
Show current prompt card (instruction + source text)
Provide a text area for "Your version"
Optional mic button to record and transcribe with Whisper; the user can insert the transcript as their answer (recording sketch after this list)
Navigation: Previous / Next; progress indicator
Export: download all answers as a JSON file (export sketch after this list)
Persist the Groq API key in localStorage; a server-side key is used when one is configured
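A minimal flow-state sketch, assuming the page keeps everything in plain module state; names like `loadPrompts` and `progress` are hypothetical.

```ts
// Flow state: load prompts once, track the current index for Previous/Next,
// and derive the progress indicator from it. All names here are illustrative.
let prompts: PromptItem[] = [];
let index = 0;

async function loadPrompts(): Promise<void> {
  const res = await fetch("/data/questions.json");
  prompts = await res.json();
}

const next = () => { if (index < prompts.length - 1) index++; };
const prev = () => { if (index > 0) index--; };
const progress = () => `${index + 1} / ${prompts.length}`; // e.g. "3 / 24"
```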
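A sketch of the mic flow, assuming POST /api/transcribe accepts multipart form data with an `audio` field and responds with { text } (per the backend section below); the `onTranscript` callback is hypothetical.

```ts
// Start recording; returns a stop() function for the mic button's second click.
async function startRecording(onTranscript: (text: string) => void): Promise<() => void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    stream.getTracks().forEach((t) => t.stop()); // release the microphone
    const form = new FormData();
    form.append("audio", new Blob(chunks, { type: recorder.mimeType || "audio/webm" }), "answer.webm");
    const res = await fetch("/api/transcribe", { method: "POST", body: form });
    const { text } = await res.json();
    onTranscript(text); // caller offers to insert the transcript into the textarea
  };
  recorder.start();
  return () => recorder.stop();
}
```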
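A sketch of the export step, reusing the `AnswerRecord` type from the Data Model sketch.

```ts
// Serialize all collected answers and trigger a browser download.
function exportAnswers(answers: AnswerRecord[]): void {
  const blob = new Blob([JSON.stringify(answers, null, 2)], { type: "application/json" });
  const url = URL.createObjectURL(blob);
  const a = document.createElement("a");
  a.href = url;
  a.download = "answers.json";
  a.click();
  URL.revokeObjectURL(url);
}
```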
Backend Endpoints (Hono)
GET /data/questions.json → serves the local JSON file (Val Town-compatible loader)
POST /api/transcribe → forwards uploaded audio to Groq Whisper (model: whisper-large-v3) and returns { text }
Existing /api/check-key retained for client key handling; all three routes are sketched below
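A route sketch for the three endpoints, assuming a Deno/Val Town runtime where the Hono app's fetch handler is the default export. `readFile` stands in for the project's file loader, the `x-groq-key` header for passing the client key is an assumption, and the Groq transcription endpoint is its OpenAI-compatible audio API.

```ts
import { Hono } from "npm:hono";

// Stand-in for the project's Val Town-compatible file loader (assumed).
declare function readFile(path: string, base: string): Promise<string>;

const app = new Hono();

// GET /data/questions.json → serve the local prompts file.
app.get("/data/questions.json", async (c) => {
  const json = await readFile("/data/questions.json", import.meta.url);
  return c.body(json, 200, { "Content-Type": "application/json" });
});

// POST /api/transcribe → forward the uploaded audio to Groq Whisper.
app.post("/api/transcribe", async (c) => {
  const inForm = await c.req.formData();
  const audio = inForm.get("audio");
  if (!(audio instanceof File)) return c.json({ error: "missing audio" }, 400);

  // Prefer a client-supplied key (header name is an assumption), else the server key.
  const key = c.req.header("x-groq-key") ?? Deno.env.get("GROQ_API_KEY");
  if (!key) return c.json({ error: "no API key" }, 401);

  const outForm = new FormData();
  outForm.append("file", audio, audio.name);
  outForm.append("model", "whisper-large-v3");

  const res = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}` },
    body: outForm,
  });
  const data = await res.json();
  return c.json({ text: data.text });
});

// GET /api/check-key → retained from the existing app; exact behavior assumed.
app.get("/api/check-key", (c) => c.json({ hasServerKey: !!Deno.env.get("GROQ_API_KEY") }));

export default app.fetch;
```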