# keyupdates.md

## Key Updates Needed

1. **Correct Endpoint & Auth (CRITICAL)**
   - Change from: `https://text.pollinations.ai/openai` ❌
   - Change to: `https://gen.pollinations.ai/v1/chat/completions` ✅
   - Keep: `Authorization: Bearer ${token}` ✅
2. **Model Discovery & Smart Selection (HIGH PRIORITY)**
   - `GET /v1/models` - list available text models (OpenAI-compatible)
   - `GET /text/models` - rich metadata (pricing, tools, reasoning, context_window)
   - `GET /image/models` - image models for optional cover images

   Model strategy:
   - Primary: `gemini` (cheap, good default)
   - Fallback: `openai` (baseline compatibility)
   - Complex tasks: `gemini-large` (higher-quality planning)
   - Search tasks: `gemini-search` (has `google_search` enabled)
3. **Gemini Special Features (NEW CAPABILITIES)**
   - `gemini` and `gemini-large` have `code_execution` enabled (can generate images/plots!)
   - `gemini-search` has `google_search` enabled
   - Responses may include `content_blocks` with `image_url`, `text`, or `thinking` types
4. **Vision Support (BONUS FEATURE)**
   - Can pass images in messages: `{ "type": "image_url", "image_url": {"url": "https://..."} }`
   - Useful for document screenshots/analysis!
5. **Audio Support (BONUS)**
   - Text-to-speech with voices: `alloy`, `echo`, `fable`, `onyx`, `shimmer`, etc.
   - Input audio supported
6. **Image Generation (EASY WIN)**
   - `GET /image/{prompt}?model=flux`
   - Models: `flux` (default), `turbo`, `gptimage`, `seedream`, etc.
   - Could add `/api/image/cover` for project thumbnails
7. **Account Management**
   - `GET /account/balance` - check pollen balance
   - `GET /account/profile` - user tier info
   - `GET /account/usage` - spending history with token counts & costs
8. **Model Selection Logic**

   Should add a table in SQLite to cache discovered models:

```sql
CREATE TABLE model_cache (
  id TEXT PRIMARY KEY,
  name TEXT,
  type TEXT,               -- 'text' or 'image'
  tools BOOLEAN,
  reasoning BOOLEAN,
  context_window INTEGER,
  pricing_json TEXT,
  last_updated TEXT
)
```

9. **Response Format Options**
   - `response_format`: can request JSON mode
   - `stream`: streaming support for real-time responses
   - `reasoning_effort`: control thinking depth (none/minimal/low/medium/high/xhigh)

### Should I Implement These Updates?

The most critical changes are:

- ✅ Fix endpoint URL to `gen.pollinations.ai`
- ✅ Add model discovery with `/v1/models` and `/text/models`
- ✅ Implement smart model fallback (gemini → openai → gemini-large)
- ✅ Cache model info in SQLite
- ⚠️ Optional: add image generation for project covers
- ⚠️ Optional: add account balance checking

Would you like me to implement the critical updates (1-4) now?
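As a sketch of what the item 1 fix amounts to in code, a request could be assembled as below. The helper name `buildChatRequest` and its option shapes are illustrative, not the repo's actual API:

```typescript
// Sketch: build the corrected chat-completions request (item 1).
// `buildChatRequest` is a hypothetical helper, not the repo's actual API.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  token: string,
  model: string,
  messages: { role: string; content: string }[],
  baseUrl = "https://gen.pollinations.ai", // corrected host
): ChatRequest {
  return {
    url: `${baseUrl}/v1/chat/completions`, // canonical OpenAI-compatible path
    headers: {
      "Authorization": `Bearer ${token}`, // auth scheme is unchanged
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages }),
  };
}

// Usage: const { url, headers, body } = buildChatRequest(token, "gemini", msgs);
// await fetch(url, { method: "POST", headers, body });
```

Keeping URL assembly in one place makes the later base-URL/path env split a one-line change.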

1 to 6 please

I'll implement updates 1-6 now. Let me start by updating the Pollinations LLM wrapper and adding model discovery:

Wait, please first create the plan for this... that's better.

You're right, let me plan this out properly first.

## Implementation Plan: Pollinations Integration Updates

### Overview

Update `lib/llm_pollinations.ts` and related files to use the correct canonical endpoint, implement model discovery with caching and smart fallback logic, and add optional features like image generation and account management.

### Phase 1: Fix Core Endpoint & Configuration

**File:** `lib/env.ts`

- Update default `POLLINATIONS_BASE_URL` from `https://text.pollinations.ai/openai` → `https://gen.pollinations.ai`
- Add `POLLINATIONS_CHAT_PATH` (default: `/v1/chat/completions`)
- Remove `POLLINATIONS_MODEL` env var (will be determined dynamically)
- Add optional `POLLINATIONS_PRIMARY_MODEL` for override (default: auto-discover)

Changes:

```ts
// Old
POLLINATIONS_BASE_URL: string; // default "https://text.pollinations.ai/openai"
POLLINATIONS_MODEL: string;    // default "openai"

// New
POLLINATIONS_BASE_URL: string;       // default "https://gen.pollinations.ai"
POLLINATIONS_CHAT_PATH: string;      // default "/v1/chat/completions"
POLLINATIONS_PRIMARY_MODEL?: string; // optional override, otherwise auto-discover
```

### Phase 2: SQLite Model Cache Schema

**File:** `lib/db.ts`

- Add `model_cache` table for discovered models
- Add `model_config` table for storing selected primary/fallback models

New tables:

```sql
-- Discovered models from Pollinations API
CREATE TABLE IF NOT EXISTS model_cache (
  name TEXT PRIMARY KEY,
  type TEXT NOT NULL,          -- 'text' or 'image'
  aliases TEXT,                -- JSON array
  description TEXT,
  tools INTEGER DEFAULT 0,     -- boolean: supports tool calling
  reasoning INTEGER DEFAULT 0, -- boolean: supports reasoning
  context_window INTEGER,
  pricing_json TEXT,           -- JSON object
  input_modalities TEXT,       -- JSON array
  output_modalities TEXT,      -- JSON array
  last_updated TEXT NOT NULL
);

-- Selected model configuration
CREATE TABLE IF NOT EXISTS model_config (
  key TEXT PRIMARY KEY,
  value TEXT NOT NULL,
  updated_at TEXT NOT NULL
);
-- Keys: 'primary_text_model', 'fallback_text_models', 'primary_image_model', 'last_discovery'
```

New functions:

- `upsertModelCache(model)` - save discovered model
- `getModelCache(type)` - get all cached models by type
- `getModelConfig(key)` - get config value
- `setModelConfig(key, value)` - set config value

### Phase 3: Model Discovery Service

**File:** `lib/llm_pollinations.ts` (major rewrite)

New Functions:

```ts
// Discovery endpoints
async function discoverTextModels(): Promise<TextModel[]>
// GET https://gen.pollinations.ai/v1/models (OpenAI-compatible list)
// GET https://gen.pollinations.ai/text/models (rich metadata)

async function discoverImageModels(): Promise<ImageModel[]>
// GET https://gen.pollinations.ai/image/models

// Model selection logic
async function selectBestModel(options: {
  needsTools?: boolean;
  needsReasoning?: boolean;
  minContextWindow?: number;
  preferCheap?: boolean;
}): Promise<string>

// Run discovery and cache results
async function refreshModelCache(): Promise<void>
// Called on startup (if stale) and daily via interval job

// Get current model config with fallbacks
async function getModelWithFallbacks(): Promise<{
  primary: string;
  fallbacks: string[];
}>
```

Model selection priority:

1. If the `POLLINATIONS_PRIMARY_MODEL` env var is set → use it
2. Otherwise, from discovery:
   - Primary: `gemini` (cheap, `tools=true`)
   - Fallback 1: `openai` (broad compatibility)
   - Fallback 2: `gemini-large` (complex tasks)
3. Skip any that don't exist in `/v1/models`

### Phase 4: Smart Request Wrapper with Auto-Fallback

**File:** `lib/llm_pollinations.ts`
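The Phase 3 selection priority just listed reduces to a pure helper, which the request wrapper below would call. This is a sketch with hypothetical names; the real `lib/llm_pollinations.ts` may differ:

```typescript
// Sketch of the Phase 3 selection priority. CachedModel mirrors a few
// model_cache columns; selectModels is a hypothetical name.
interface CachedModel {
  name: string;
  tools: boolean;
  reasoning: boolean;
}

function selectModels(
  discovered: CachedModel[],
  envOverride?: string, // POLLINATIONS_PRIMARY_MODEL, if set
): { primary: string; fallbacks: string[] } {
  if (envOverride) return { primary: envOverride, fallbacks: [] };
  const known = new Set(discovered.map((m) => m.name));
  // Preferred order: cheap default, broad compatibility, complex tasks.
  const wanted = ["gemini", "openai", "gemini-large"];
  // Skip any model that didn't show up in /v1/models.
  const present = wanted.filter((name) => known.has(name));
  const [primary = "openai", ...fallbacks] = present;
  return { primary, fallbacks };
}
```

Keeping this pure (discovery results in, names out) makes the priority order trivially unit-testable.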

Enhanced `generateChat` function:

```ts
async function generateChat(
  messages: ChatMessage[],
  options: {
    model?: string;              // explicit model override
    temperature?: number;
    maxTokens?: number;
    useTools?: boolean;          // hint for model selection
    complexTask?: boolean;       // hint to use stronger model
    retryWithFallback?: boolean; // default true
  }
): Promise<GenerateResult>
```

Logic:

1. Select model based on options/hints
2. Make request to `POST /v1/chat/completions`
3. On error (model unavailable, rate limit, etc.): if `retryWithFallback=true`, try the next fallback model
4. Log which model was used
5. Return result with `modelUsed` field

New response type:

```ts
interface GenerateResult {
  text: string;
  raw: ChatCompletionResponse;
  modelUsed: string;
  fallbacksAttempted: string[];
}
```

### Phase 5: Account Management

**File:** `lib/llm_pollinations.ts`
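The logic steps of Phase 4 boil down to a retry loop. Here is a sketch with the transport injected so the control flow is visible and testable; the names follow the plan's types (with `raw` omitted for brevity) but the details are illustrative:

```typescript
// Sketch of Phase 4's retry-with-fallback loop. The transport (callModel)
// is injected; in real code it would POST to /v1/chat/completions.
interface GenerateResult {
  text: string;
  modelUsed: string;
  fallbacksAttempted: string[];
}

async function generateWithFallback(
  models: string[], // primary first, then fallbacks
  callModel: (model: string) => Promise<string>,
): Promise<GenerateResult> {
  const attempted: string[] = [];
  let lastError: unknown;
  for (const model of models) {
    try {
      const text = await callModel(model);
      return { text, modelUsed: model, fallbacksAttempted: attempted };
    } catch (err) {
      attempted.push(model); // model unavailable, rate limited, etc.
      lastError = err;
    }
  }
  throw new Error(`all models failed (${attempted.join(", ")}): ${lastError}`);
}
```

Injecting the transport keeps the fallback policy free of HTTP details, so it can be unit-tested with a stub.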

New Functions:

```ts
// Check pollen balance
async function getAccountBalance(): Promise<{ balance: number }>
// GET /account/balance

// Get user profile/tier
async function getAccountProfile(): Promise<{
  name: string;
  tier: 'anonymous' | 'spore' | 'seed' | 'flower' | 'nectar' | 'router';
  nextResetAt: string;
}>
// GET /account/profile

// Get usage history
async function getAccountUsage(limit?: number): Promise<UsageRecord[]>
// GET /account/usage
```

**File:** `app/http.ts`

New API Endpoints:

- `GET /api/ai/balance` → `{ balance: number }`
- `GET /api/ai/profile` → `{ tier, nextResetAt, ... }`
- `GET /api/ai/usage` → `{ usage: [...], count }`
- `GET /api/ai/models` → `{ text: [...], image: [...], config: {...} }`
- `POST /api/ai/discover` → trigger model rediscovery (admin only)

### Phase 6: Image Generation

**File:** `lib/llm_pollinations.ts`

New Functions:

```ts
// Generate image from prompt
async function generateImage(
  prompt: string,
  options?: {
    model?: string; // default 'flux'
    width?: number;
    height?: number;
  }
): Promise<{ imageUrl: string; model: string }>
// GET /image/{prompt}?model=flux
// Returns a binary image - we'll need to either:
//   a) return the URL directly (simpler), or
//   b) store in blob storage and return our URL

// List available image models
async function getImageModels(): Promise<ImageModel[]>
```

**File:** `app/http.ts`
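Since `/image/{prompt}` is a plain GET, option (a) reduces to URL construction. A sketch follows; `imageUrl` is a hypothetical helper name, and the `flux` default is taken from update 6:

```typescript
// Sketch: build a GET /image/{prompt} URL (Phase 6, option (a)).
// imageUrl is a hypothetical name, not the repo's actual API.
function imageUrl(
  prompt: string,
  options: { model?: string; width?: number; height?: number } = {},
): string {
  const { model = "flux", width, height } = options; // flux is the stated default
  const params = new URLSearchParams({ model });
  if (width) params.set("width", String(width));
  if (height) params.set("height", String(height));
  // Prompt goes in the path, so it must be percent-encoded.
  return `https://gen.pollinations.ai/image/${encodeURIComponent(prompt)}?${params}`;
}
```

A `/api/image/cover` route could call this and either redirect to the URL or fetch and persist the bytes.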

New API Endpoints:

- `POST /api/image/generate` → `{ prompt, model? }` → `{ imageUrl }`
- `GET /api/image/models` → `{ models: [...] }`

Use cases:

- Generate project cover images
- Visual summaries of document collections
- AI-generated thumbnails for docs without images

### Phase 7: Update Interval Job for Model Discovery

**File:** `jobs/sync/interval.ts`

Add to existing job:

```ts
export default async function () {
  // Existing sync logic...

  // Also refresh model cache daily
  const lastDiscovery = await getModelConfig('last_discovery');
  const oneDayAgo = Date.now() - 24 * 60 * 60 * 1000;

  if (!lastDiscovery || new Date(lastDiscovery).getTime() < oneDayAgo) {
    await refreshModelCache();
  }
}
```

### Phase 8: Update Agent to Use Smart Model Selection

**File:** `lib/agent.ts`
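The staleness guard in the Phase 7 job above could be factored into a pure helper so the interval job stays testable; `isCacheStale` is a hypothetical name:

```typescript
// Sketch: Phase 7's daily-staleness check as a pure, testable helper.
// isCacheStale is a hypothetical name, not the repo's actual API.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

function isCacheStale(
  lastDiscovery: string | null, // ISO timestamp from model_config, or null
  now: number = Date.now(),
): boolean {
  if (!lastDiscovery) return true; // never discovered yet
  return new Date(lastDiscovery).getTime() < now - ONE_DAY_MS;
}
```

The job body then reduces to `if (isCacheStale(lastDiscovery)) await refreshModelCache();`.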

Changes:

- Use `generateChat` with `complexTask: true` for plan generation
- Use `generateChat` with `useTools: false, preferCheap: true` for simple tasks
- Handle `content_blocks` in responses (for Gemini's `code_execution` output)

### File Change Summary

| File | Changes |
| --- | --- |
| `lib/env.ts` | Update defaults, new env vars |
| `lib/db.ts` | Add `model_cache`, `model_config` tables |
| `lib/llm_pollinations.ts` | Major rewrite: discovery, fallbacks, images, account |
| `lib/agent.ts` | Use smart model selection |
| `app/http.ts` | New endpoints under `/api/ai/` and `/api/image/` |
| `jobs/sync/interval.ts` | Add model cache refresh |

### Execution Order

1. Phase 1: Update `lib/env.ts` (quick, unblocks everything)
2. Phase 2: Update `lib/db.ts` (add tables)
3. Phases 3-4: Rewrite `lib/llm_pollinations.ts` (core logic)
4. Phase 5: Add account management to `lib/llm_pollinations.ts`
5. Phase 6: Add image generation to `lib/llm_pollinations.ts`
6. Phase 7: Update `jobs/sync/interval.ts`
7. Phase 8: Update `lib/agent.ts`
8. Phase 9: Add new API routes to `app/http.ts`
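One Phase 8 detail worth sketching is the `content_blocks` handling, assuming the block shapes named in update 3 (`text`, `thinking`, `image_url`); both the shapes and the helper name are assumptions, not confirmed API:

```typescript
// Sketch: flatten Gemini content_blocks (Phase 8) into plain text plus
// any generated image URLs. Shapes assumed from update 3's description.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "thinking"; thinking: string }
  | { type: "image_url"; image_url: { url: string } };

function flattenContentBlocks(blocks: ContentBlock[]): {
  text: string;
  images: string[];
} {
  const parts: string[] = [];
  const images: string[] = [];
  for (const block of blocks) {
    if (block.type === "text") parts.push(block.text);
    else if (block.type === "image_url") images.push(block.image_url.url);
    // "thinking" blocks are dropped from user-facing output
  }
  return { text: parts.join("\n"), images };
}
```

The agent could store `images` alongside the reply so `code_execution` plots are not silently lost.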
