Your earlier direction was mostly right (OpenAI-compatible chat completions +
Bearer key), but the correct canonical host/endpoint from paste.txt is
https://gen.pollinations.ai/v1/chat/completions (and /openai is explicitly
marked legacy/deprecated).[1] So Townie should implement Pollinations against
gen.pollinations.ai by default, keep the base URL configurable, and add model
discovery + fallback selection.[2][1]
- Set defaults to:
  - `POLLINATIONS_BASE_URL = "https://gen.pollinations.ai"`[1]
  - `POLLINATIONS_CHAT_PATH = "/v1/chat/completions"`[1]
  - `Authorization: Bearer ${POLLINATIONS_TOKEN}` header.[1]
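The defaults above can be sketched as server-side config. This is a minimal sketch; the env var names (`POLLINATIONS_BASE_URL`, `POLLINATIONS_TOKEN`) are conventions assumed here, not names fixed by Pollinations:

```typescript
// Defaults per the docs, with the base URL kept configurable via env var.
const POLLINATIONS_BASE_URL =
  process.env.POLLINATIONS_BASE_URL ?? "https://gen.pollinations.ai";
const POLLINATIONS_CHAT_PATH = "/v1/chat/completions";

// Full chat-completions URL plus auth headers for server-side requests only.
const chatUrl = `${POLLINATIONS_BASE_URL}${POLLINATIONS_CHAT_PATH}`;
const authHeaders = {
  Authorization: `Bearer ${process.env.POLLINATIONS_TOKEN ?? ""}`,
  "Content-Type": "application/json",
};
```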
- Do not use `/openai` except as an emergency fallback, because the docs mark it as a legacy endpoint and recommend `/v1/chat/completions` instead.[1]
- Use secret keys (`sk_`) only on the server side (Val Town env var), because publishable keys are beta and explicitly warned against for production client-side use.[2][1]
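A minimal server-side call shape, assuming the standard OpenAI-compatible request/response format the docs describe (the helper names here are illustrative, not from the docs):

```typescript
// Pure helper so the request body can be unit-tested without network access.
function buildChatRequest(prompt: string, model = "gemini") {
  return {
    model,
    messages: [{ role: "user" as const, content: prompt }],
  };
}

// Server-side only: the sk_ secret key lives in a Val Town env var,
// never in client code.
async function chatComplete(prompt: string, model = "gemini"): Promise<string> {
  const res = await fetch("https://gen.pollinations.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.POLLINATIONS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(prompt, model)),
  });
  if (!res.ok) throw new Error(`Pollinations ${res.status}: ${await res.text()}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```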
Pollinations explicitly supports model discovery endpoints, so Townie should implement runtime discovery and pick models based on capabilities and pricing metadata instead of hardcoding names.[1]
- On startup (and daily), call `GET https://gen.pollinations.ai/v1/models` to get available OpenAI-compatible text models.[1]
- Also call `GET https://gen.pollinations.ai/text/models` to get richer model metadata, including pricing and capability flags like `tools`, `reasoning`, and `context_window`.[1]
- For images, call `GET https://gen.pollinations.ai/image/models` to enumerate image models and pricing.[1]
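The three discovery calls above could look like this sketch. The `/v1/models` payload is assumed to follow the OpenAI list shape (`{ data: [{ id: ... }] }`); the exact `/text/models` and `/image/models` response shapes are assumptions based on the docs' description:

```typescript
// Extract model IDs from an OpenAI-style /v1/models list payload.
// Pure, so it can be tested without hitting the network.
function modelIds(list: { data: { id: string }[] }): Set<string> {
  return new Set(list.data.map((m) => m.id));
}

// Fetch all three discovery endpoints in parallel (run on startup and daily).
async function discoverModels(base = "https://gen.pollinations.ai") {
  const headers = { Authorization: `Bearer ${process.env.POLLINATIONS_TOKEN}` };
  const [chat, text, image] = await Promise.all([
    fetch(`${base}/v1/models`, { headers }).then((r) => r.json()),
    fetch(`${base}/text/models`, { headers }).then((r) => r.json()),
    fetch(`${base}/image/models`, { headers }).then((r) => r.json()),
  ]);
  return { available: modelIds(chat), text, image };
}
```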
Use this ordered list, but only if each model exists in `/v1/models` (otherwise
skip it):[1]
- Primary: `gemini` (cheap/good default).[1]
- Fallback 1: `openai` (broad compatibility baseline per examples).[1]
- Fallback 2: `gemini-large` (use when you need higher quality / harder planning).[1]
- Optional: `gemini-search`, only if you intentionally add "web search" features, since it's described as having `google_search` enabled.[1]
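The "only if it exists in /v1/models" rule is a one-line filter over the preferred ordering. A minimal sketch (the function name is illustrative):

```typescript
// Preferred ordering from the list above; gemini-search is intentionally
// omitted unless web-search features are enabled.
const PREFERRED_ORDER = ["gemini", "openai", "gemini-large"];

// Keep only the preferred models that discovery actually returned,
// preserving priority order.
function pickModelChain(
  available: Set<string>,
  preferred: string[] = PREFERRED_ORDER,
): string[] {
  return preferred.filter((id) => available.has(id));
}
```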
Townie should store the chosen primary/fallback model IDs in SQLite (or config) after discovery so requests don’t depend on discovery every time.[2][1]
- Inbox triage + tagging suggestions: use the primary (`gemini`) to keep costs low.[1]
- AI "plan" generation for large batches (projects/apply rules): allow escalation to `gemini-large` when the task is complex or the first attempt fails validation.[1]
- If Pollinations returns model errors, automatically retry with the next fallback in order.[1]
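The retry-with-next-fallback behavior can be factored into a small generic helper, sketched here (error classification is simplified: any thrown error advances to the next model):

```typescript
// Try each model in the chain until one succeeds; rethrow the last
// error if they all fail.
async function withFallback<T>(
  chain: string[],
  run: (model: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown;
  for (const model of chain) {
    try {
      return await run(model);
    } catch (err) {
      lastError = err; // model error: fall through to the next choice
    }
  }
  throw lastError ?? new Error("no models available");
}
```

Usage would wrap any chat call, e.g. `withFallback(chain, (m) => chatComplete(prompt, m))`.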
Pollinations supports image generation at
`GET https://gen.pollinations.ai/image/<prompt>?model=flux` with Bearer auth, so
Townie can implement a `/api/image/cover` endpoint (or similar) for project
cover images or "visual summaries."[1] Also implement image model fallback using
`/image/models` rather than hardcoding `flux` everywhere (but `flux` is a fine
default).[1]
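Since the prompt rides in the URL path, building the image URL is mostly an escaping exercise. A minimal sketch with `flux` as the default:

```typescript
// Build the image-generation URL; encodeURIComponent keeps arbitrary
// prompts (spaces, punctuation) URL-safe in the path segment.
function imageUrl(
  prompt: string,
  model = "flux",
  base = "https://gen.pollinations.ai",
): string {
  return `${base}/image/${encodeURIComponent(prompt)}?model=${encodeURIComponent(model)}`;
}
```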
Readwise Reader API auth stays `Authorization: Token XXX` (separate from
Pollinations), stored in Val Town env vars.[3][2] Keep Pollinations and Readwise
tokens server-side only (Val Town environment variables), because Val Town
endpoints are public-by-URL unless you add auth.[2]
If you want, I can give Townie an exact “decision function” (TypeScript) that
consumes /text/models and picks the cheapest model that still has tools=true
(if you need tool calling) vs tools=false (for pure summarization).[1]
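Here is a rough sketch of that decision function. The `/text/models` metadata shape (a `tools` flag and per-token `pricing` fields) is an assumption based on the docs' description, so treat the field names as placeholders to verify against real responses:

```typescript
// Assumed shape of one /text/models entry; verify against live responses.
interface TextModel {
  name: string;
  tools?: boolean;
  pricing?: { prompt?: number; completion?: number }; // assumed per-token prices
}

// Pick the cheapest model, optionally requiring tool-calling support.
// Models with no pricing metadata sort last (Infinity cost).
function cheapestModel(models: TextModel[], needTools: boolean): string | undefined {
  const cost = (m: TextModel) =>
    (m.pricing?.prompt ?? Infinity) + (m.pricing?.completion ?? Infinity);
  return models
    .filter((m) => (needTools ? m.tools === true : true))
    .sort((a, b) => cost(a) - cost(b))[0]?.name;
}
```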
Citations:
[1] paste.txt
[2] llms-full-valtown.txt
[3] Reader API