readwise-mastra
Status: ⬜ Not Started
File: lib/env.ts
- Update default `POLLINATIONS_BASE_URL` → `https://gen.pollinations.ai`
- Add `POLLINATIONS_CHAT_PATH` (default: `/v1/chat/completions`)
- Remove `POLLINATIONS_MODEL` env var (will be dynamic)
- Add optional `POLLINATIONS_PRIMARY_MODEL` for override (see the sketch below)
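A minimal sketch of these lib/env.ts changes, assuming the usual Deno env access available in Val Town; the export shape is illustrative, not final:

```ts
// lib/env.ts (sketch): names follow the plan, defaults are read at import time.
export const POLLINATIONS_BASE_URL =
  Deno.env.get("POLLINATIONS_BASE_URL") ?? "https://gen.pollinations.ai";

export const POLLINATIONS_CHAT_PATH =
  Deno.env.get("POLLINATIONS_CHAT_PATH") ?? "/v1/chat/completions";

// POLLINATIONS_MODEL is intentionally gone; the model is now chosen dynamically.
// Optional manual override of the dynamically selected primary model.
export const POLLINATIONS_PRIMARY_MODEL =
  Deno.env.get("POLLINATIONS_PRIMARY_MODEL") || undefined;
```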
Status: ⬜ Not Started
File: lib/db.ts
- Add `model_cache` table for discovered models
- Add `model_config` table for selected primary/fallback models
- Add `upsertModelCache(model)` function
- Add `getModelCache(type)` function
- Add `getModelConfig(key)` / `setModelConfig(key, value)` functions (sketch after the schema)
Schema:

    CREATE TABLE IF NOT EXISTS model_cache (
      name TEXT PRIMARY KEY,
      type TEXT NOT NULL,          -- 'text' or 'image'
      aliases TEXT,                -- JSON array
      description TEXT,
      tools INTEGER DEFAULT 0,     -- boolean
      reasoning INTEGER DEFAULT 0, -- boolean
      context_window INTEGER,
      pricing_json TEXT,
      input_modalities TEXT,       -- JSON array
      output_modalities TEXT,      -- JSON array
      last_updated TEXT NOT NULL
    );

    CREATE TABLE IF NOT EXISTS model_config (
      key TEXT PRIMARY KEY,
      value TEXT NOT NULL,
      updated_at TEXT NOT NULL
    );
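A sketch of the new helpers against this schema, assuming Val Town's std sqlite client (whose result exposes `columns` plus positional `rows`); adjust if the project wraps the client differently:

```ts
// lib/db.ts (sketch): query helpers for the model_cache / model_config tables.
import { sqlite } from "https://esm.town/v/std/sqlite";

// Return cached models of one type ('text' or 'image') as plain objects.
export async function getModelCache(type: "text" | "image") {
  const result = await sqlite.execute({
    sql: "SELECT * FROM model_cache WHERE type = ?",
    args: [type],
  });
  // rows are positional arrays; zip them with the column names.
  return result.rows.map((row) =>
    Object.fromEntries(result.columns.map((col, i) => [col, row[i]]))
  );
}

export async function getModelConfig(key: string): Promise<string | null> {
  const result = await sqlite.execute({
    sql: "SELECT value FROM model_config WHERE key = ?",
    args: [key],
  });
  return result.rows.length ? (result.rows[0][0] as string) : null;
}

export async function setModelConfig(key: string, value: string) {
  await sqlite.execute({
    sql: `INSERT INTO model_config (key, value, updated_at)
          VALUES (?, ?, ?)
          ON CONFLICT(key) DO UPDATE SET
            value = excluded.value,
            updated_at = excluded.updated_at`,
    args: [key, value, new Date().toISOString()],
  });
}
```

`upsertModelCache(model)` would follow the same `INSERT ... ON CONFLICT(name) DO UPDATE` pattern over the `model_cache` columns.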
Status: ⬜ Not Started
File: lib/llm_pollinations.ts
- Add `discoverTextModels()` - GET /v1/models + /text/models
- Add `discoverImageModels()` - GET /image/models
- Add `refreshModelCache()` - run discovery and cache results to SQLite
- Add `selectBestModel(options)` - pick a model based on needs
- Add `getModelWithFallbacks()` - return primary + fallback list

Model Priority (see the selection sketch below):
- `gemini` (cheap, tools=true) - primary
- `openai` (baseline) - fallback 1
- `gemini-large` (complex tasks) - fallback 2
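A sketch of the selection logic. The `useTools`/`complexTask` hints come from the plan; the static table below stands in for a real lookup against `model_cache`, and the ordering rules are assumptions:

```ts
// lib/llm_pollinations.ts (sketch): model selection with fallbacks.
interface ModelHints {
  useTools?: boolean;
  complexTask?: boolean;
}

// Mirrors the priority in the plan; real data would come from getModelCache("text").
const TEXT_MODELS = [
  { name: "gemini", tools: true },       // cheap, tools=true - primary
  { name: "openai", tools: true },       // baseline - fallback 1
  { name: "gemini-large", tools: true }, // complex tasks - fallback 2
];

export function getModelWithFallbacks(hints: ModelHints = {}): string[] {
  const names = TEXT_MODELS
    .filter((m) => !hints.useTools || m.tools)
    .map((m) => m.name);
  if (hints.complexTask && names.includes("gemini-large")) {
    // Prefer the large model for complex work, keep the others as fallbacks.
    return ["gemini-large", ...names.filter((n) => n !== "gemini-large")];
  }
  return names;
}

export function selectBestModel(hints: ModelHints = {}): string {
  return getModelWithFallbacks(hints)[0];
}
```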
Status: ⬜ Not Started
File: lib/llm_pollinations.ts
- Update `generateChat()` with model hints (`useTools`, `complexTask`)
- Add auto-retry with fallback models on error (see the retry sketch below)
- Add `modelUsed` and `fallbacksAttempted` to the response
- Update `generateJSON()` to use the new wrapper
- Update `generateText()` to use the new wrapper
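A sketch of the retry wrapper, assuming an OpenAI-compatible chat completions endpoint and reusing the env constants and `getModelWithFallbacks()` from the sketches above:

```ts
// lib/llm_pollinations.ts (sketch): chat call with fallback retry.
import { POLLINATIONS_BASE_URL, POLLINATIONS_CHAT_PATH } from "./env.ts";

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
type ModelHints = { useTools?: boolean; complexTask?: boolean };

export async function generateChat(messages: ChatMessage[], hints: ModelHints = {}) {
  const candidates = getModelWithFallbacks(hints); // defined earlier in this file
  const fallbacksAttempted: string[] = [];

  for (const model of candidates) {
    try {
      const res = await fetch(`${POLLINATIONS_BASE_URL}${POLLINATIONS_CHAT_PATH}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, messages }),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const data = await res.json();
      // Surface which model answered and which ones failed before it.
      return { ...data, modelUsed: model, fallbacksAttempted };
    } catch {
      fallbacksAttempted.push(model); // record the failure, try the next model
    }
  }
  throw new Error(`All models failed: ${fallbacksAttempted.join(", ")}`);
}
```

`generateJSON()` and `generateText()` would call this wrapper instead of hitting the API directly.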
Status: ⬜ Not Started
File: lib/llm_pollinations.ts
- Add `generateImage(prompt, options)` function (sketched below, after the endpoints)
- Download the image and store it in Val Town blob storage
- Add `getImageModels()` function
File: app/http.ts
- Add `POST /api/image/generate` endpoint
- Add `GET /api/image/models` endpoint
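A sketch of `generateImage()`. The image endpoint shape and option names are assumptions to verify against the Pollinations docs; storage uses Val Town's std blob client:

```ts
// lib/llm_pollinations.ts (sketch): image generation + blob storage.
import { blob } from "https://esm.town/v/std/blob";

export async function generateImage(
  prompt: string,
  options: { model?: string; width?: number; height?: number } = {},
) {
  const params = new URLSearchParams();
  if (options.model) params.set("model", options.model);
  if (options.width) params.set("width", String(options.width));
  if (options.height) params.set("height", String(options.height));

  // Assumed endpoint: GET /prompt/{prompt} on the Pollinations image host.
  const url = `https://image.pollinations.ai/prompt/${encodeURIComponent(prompt)}?${params}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Image generation failed: HTTP ${res.status}`);

  // Persist the raw bytes in blob storage; POST /api/image/generate can return the key.
  const key = `images/${Date.now()}`;
  await blob.set(key, new Uint8Array(await res.arrayBuffer()));
  return { key, model: options.model ?? null };
}
```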
Status: ⬜ Not Started
File: jobs/sync/interval.ts
- Add model cache refresh (daily)
- Check the `last_discovery` timestamp before refreshing (see the guard sketch below)
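A sketch of the daily guard, built on the hypothetical `getModelConfig`/`setModelConfig` helpers and `refreshModelCache()` described above (import paths are illustrative):

```ts
// jobs/sync/interval.ts (sketch): refresh the model cache at most once per day.
import { getModelConfig, setModelConfig } from "../../lib/db.ts";
import { refreshModelCache } from "../../lib/llm_pollinations.ts";

const ONE_DAY_MS = 24 * 60 * 60 * 1000;

export async function maybeRefreshModels() {
  const last = await getModelConfig("last_discovery");
  if (last && Date.now() - new Date(last).getTime() < ONE_DAY_MS) {
    return; // refreshed within the last 24h, nothing to do
  }
  await refreshModelCache();
  await setModelConfig("last_discovery", new Date().toISOString());
}
```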
Status: ⬜ Not Started
File: lib/agent.ts
- Use `complexTask: true` for plan generation
- Handle `content_blocks` in Gemini responses (rough sketch below)
- Use a cheaper model for simple triage suggestions
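A rough sketch of how lib/agent.ts might pass the hints and normalize Gemini output; the `content_blocks` shape shown here is a guess and should be checked against real responses:

```ts
// lib/agent.ts (sketch): plan generation with the complexTask hint.
import { generateChat } from "./llm_pollinations.ts";

async function generatePlan(messages: { role: string; content: string }[]) {
  // Plan generation is the expensive path, so ask for the larger model first.
  const res = await generateChat(messages, { complexTask: true });

  const message: any = res.choices?.[0]?.message ?? {};
  // Some Gemini responses arrive as an array of blocks rather than one string;
  // flatten the text blocks before handing the plan to the rest of the agent.
  if (Array.isArray(message.content_blocks)) {
    return message.content_blocks
      .filter((b: { type?: string }) => b.type === "text")
      .map((b: { text?: string }) => b.text ?? "")
      .join("\n");
  }
  return message.content;
}
```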
Status: ⬜ Not Started
File: app/http.ts
- Add `GET /api/ai/models` - list cached models + config
- Add `POST /api/ai/discover` - trigger model rediscovery (route sketch below)
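A minimal sketch of the two routes as a plain fetch-style handler; how it plugs into whatever router app/http.ts already uses, and the `primary_model` config key, are assumptions:

```ts
// app/http.ts (sketch): admin routes for model cache inspection and rediscovery.
import { getModelCache, getModelConfig } from "../lib/db.ts";
import { refreshModelCache } from "../lib/llm_pollinations.ts";

export async function handleAiRoutes(req: Request): Promise<Response | null> {
  const url = new URL(req.url);

  if (req.method === "GET" && url.pathname === "/api/ai/models") {
    const models = [
      ...(await getModelCache("text")),
      ...(await getModelCache("image")),
    ];
    const primary = await getModelConfig("primary_model"); // assumed config key
    return Response.json({ models, config: { primary } });
  }

  if (req.method === "POST" && url.pathname === "/api/ai/discover") {
    await refreshModelCache();
    return Response.json({ ok: true, refreshedAt: new Date().toISOString() });
  }

  return null; // not an AI route; let the main router handle it
}
```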
(none yet)
- Image storage: Using Val Town blob storage (option B)
- Model discovery: Once daily via interval job
- No account balance tracking (provider-agnostic design)
- Plan stored in `grounding.md`