# Redis Usage Analysis & Optimization Guide

## Current Usage Metrics

- 1.8k reads/hour during development (higher than optimal)
- Main causes: Auto-refresh intervals + No caching + Inefficient queries

## Key Issues Identified

### 1. Frontend Auto-Refresh

- Admin page: Refreshes every 5 seconds
- Answer page: Refreshes every 10 seconds
- Each refresh triggers multiple Redis operations

### 2. Inefficient Statistics Calculation

- `GET /events/:id` recalculates stats on every request
- Uses `redis.keys()` pattern matching (an expensive full-keyspace scan; see the sketch below)
- Fetches all answers for the event every time
- No caching of computed results
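
For reference, the read path being described likely looks something like the sketch below. This is a hypothetical reconstruction, not code from the repo: the `event:${id}:answer:*` key layout is an assumption.

```ts
// Hypothetical reconstruction of the current expensive pattern (key names assumed).
const answerKeys = await redis.keys(`event:${id}:answer:*`); // scans the whole keyspace
const answers = await Promise.all(answerKeys.map((key) => redis.get(key))); // one read per answer
// Stats are then recomputed from `answers` on every request, with no caching.
```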

### 3. Duplicate Event Reads

- Multiple endpoints read the same event data without caching
- Event validation happens on every answer submission

## Optimization Recommendations

### Quick Wins (Implement First)

#### 1. Reduce Auto-Refresh Frequency

```ts
// Admin page (admin.http.ts)
setInterval(loadEvent, 30000); // Change from 5000 to 30000

// Answer page (answer.http.ts)
setInterval(loadEvent, 30000); // Change from 10000 to 30000
```

#### 2. Smart Polling (Only Refresh When Visible)

```ts
setInterval(() => {
  if (document.visibilityState === 'visible') {
    loadEvent();
  }
}, 30000);
```
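
To avoid showing stale data when a user switches back to the tab, a `visibilitychange` listener can trigger an immediate refresh on top of the polling above. This is a suggested addition, not existing code:

```ts
// Refresh as soon as the tab becomes visible again, so users don't wait
// for the next 30-second tick after switching back.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'visible') {
    loadEvent();
  }
});
```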

### Medium-Term Improvements

#### 3. Cache Computed Statistics

Store pre-calculated stats with a TTL:

```ts
// After calculating stats in GET /events/:id
await redis.setex(
  `event:${id}:stats`,
  300, // 5-minute TTL
  JSON.stringify({
    name: event.name,
    options: event.options,
    stats,
    answers_by_option: answersByOption,
    total_participants: participants.size,
  }),
);
```
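
The corresponding read path checks this key before recomputing. A minimal sketch, assuming the same `redis` client as above and a handler that returns a standard `Response`:

```ts
// In GET /events/:id: return cached stats when present, otherwise recompute.
const cached = await redis.get(`event:${id}:stats`);
if (cached) {
  // Some clients (e.g. @upstash/redis) auto-deserialize JSON; handle both cases.
  const body = typeof cached === "string" ? JSON.parse(cached) : cached;
  return Response.json(body);
}
// ...fall through to the existing calculation, then cache it with setex as shown above.
```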

#### 4. Replace `redis.keys()` with Redis Sets

```ts
// When creating/updating answer
await redis.sadd(`event:${id}:participants`, person_name);

// When fetching (instead of redis.keys)
const participants = await redis.smembers(`event:${id}:participants`);
```
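
If answers can ever be deleted, the set should be kept in sync as well; a hypothetical cleanup step (the delete path itself is an assumption):

```ts
// Hypothetical: when a participant's last answer for an event is removed,
// drop them from the set so participant counts stay accurate.
await redis.srem(`event:${id}:participants`, person_name);
```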

### Long-Term Optimizations

#### 5. Implement Incremental Stats Updates

Update statistics when answers change rather than recalculating:

```ts
// When an answer is submitted or updated
await redis.hincrby(`event:${id}:stats`, option, 1);
// Only needed when an existing answer switches options:
await redis.hincrby(`event:${id}:stats`, oldOption, -1);
```
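
With the counters maintained on writes, the stats read collapses to a single hash fetch. A minimal sketch, assuming the hash is kept accurate by the write path above:

```ts
// GET /events/:id can read the counters directly instead of scanning answers.
const stats = await redis.hgetall(`event:${id}:stats`);
// `stats` maps each option to its count, e.g. { "Mon 10:00": 3, "Tue 14:00": 5 } (illustrative values)
```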

#### 6. Add In-Memory Caching Layer

Cache frequently accessed data (such as event records) with a short TTL in the API layer.
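
A minimal sketch of such a layer (the helper name and 10-second TTL are illustrative, not from the repo); entries only live as long as the process, which is acceptable for a short TTL:

```ts
// Tiny in-memory TTL cache in front of Redis reads.
const memoryCache = new Map<string, { value: unknown; expiresAt: number }>();

async function cachedGet<T>(key: string, loader: () => Promise<T>, ttlMs = 10_000): Promise<T> {
  const hit = memoryCache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const value = await loader();
  memoryCache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Usage inside a handler (assuming `redis` and `id` are in scope):
//   const event = await cachedGet(`event:${id}`, () => redis.get(`event:${id}`));
```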

## Expected Impact

- 60-80% reduction in Redis reads
- From ~1,800 reads/hour to ~300-500 reads/hour
- Improved response times for stats-heavy endpoints
- Better scalability for events with many participants

## Implementation Priority

1. ✅ Reduce auto-refresh intervals (5 min work)
2. ✅ Add visibility-based polling (10 min work)
3. ⏳ Cache computed statistics (30 min work)
4. ⏳ Replace `redis.keys()` with Sets (1 hour work)
5. ⏳ Implement incremental stats (2 hours work)

## Monitoring

After implementing optimizations, monitor:

- Redis read/write operations per hour (see the sketch below for a lightweight in-app counter)
- Response times for `GET /events/:id`
- User experience impact (any delays in updates?)
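
Hosted Redis dashboards typically report command counts already; if an in-app counter is still useful, a hypothetical wrapper (assuming a client whose methods return promises, like @upstash/redis) could look like:

```ts
// Hypothetical instrumentation: count Redis commands per rolling hour in-process.
let opCount = 0;
let windowStart = Date.now();

function countOp() {
  if (Date.now() - windowStart >= 3_600_000) {
    console.log(`Redis ops in last hour: ${opCount}`);
    opCount = 0;
    windowStart = Date.now();
  }
  opCount++;
}

// Wrap the client so every method call is counted before being forwarded.
function instrument<T extends object>(client: T): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      return (...args: unknown[]) => {
        countOp();
        return value.apply(target, args);
      };
    },
  });
}

// Usage: const redis = instrument(rawRedisClient);
```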