There's a clear gap in the Chicago cultural information ecosystem: no Bluesky bot exists to notify residents about free museum days.
The information aggregation problem is largely solved by existing websites (Choose Chicago, Upparent, Do312), but the distribution/discoverability problem is not.
A Bluesky bot would:

- Passively surface free days to people who already use the platform
- Fill a civic information niche that's underserved on Bluesky
- Require minimal code and maintenance
- Generate reusable data for other formats (ICS feed, embeddable widget)
## Why NOT a Website

- Aggregator sites already solve this problem adequately
- Users are fragmented across multiple platforms (email, calendar, social)
- Web traffic for this niche is low
- Maintenance burden would be higher than a bot's
## Why NOT an Email Newsletter

- Lower engagement than a social platform for casual discovery
- Requires email list building and maintenance
- Subject to spam filters
- Already exists in limited form (My Kid List offers daily/weekly emails)
## Why Bluesky Specifically

- **No competition.** Twitter has @FreeMuseumDay (dormant); Bluesky has nothing.
- **Growing audience.** Bluesky is experiencing rapid adoption, especially among civic-minded users.
- **Civic information is valued.** Bluesky users appreciate local information, mutual-aid bots, and similar services.
- **Easier to build bot culture.** Fewer bots overall means less noise and more discovery.
- **Lower barrier to entry.** Small bots can find their audience; you don't need millions of followers.
## Technical Approach

### Data Pipeline

```
Choose Chicago / Loop Chicago (web)
              ↓
Scraper (Node.js + cheerio or similar)
              ↓
Normalize to YAML (or recutils)
              ↓
GitHub repo with git history
              ↓
[Branch 1] Bluesky bot (posts daily/weekly)
[Branch 2] ICS feed generator (for calendar apps)
[Branch 3] Future: embeddable widget
```
### Minimal Bot Implementation

Stack:

- Node.js + TypeScript (or Python)
- Bluesky SDK (@atproto/api)
- GitHub Actions (cron schedule)
- Data: YAML files in git

Posting schedule:

- Monday morning: "Here's what's free this week"
- Could also do: a daily post, a weekly digest, or real-time alerts
Example bot post:

```
🏛️ What's free in Chicago this week:

• Art Institute: Mon-Fri (weekdays only)
• Field Museum: Wednesday
• MCA: Tuesday evenings 5-9pm

+ 3 more (thread 🧵)
Data source: github.com/your-repo
```
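A minimal posting sketch, assuming a dedicated bot account with an app password stored in `BSKY_HANDLE`/`BSKY_APP_PASSWORD` environment variables; the message text is hardcoded here and would be rendered from the data files in the real bot:

```ts
// post.ts -- minimal posting sketch (env var names and message text are illustrative)
import { BskyAgent } from "@atproto/api";

async function main() {
  const agent = new BskyAgent({ service: "https://bsky.social" });
  await agent.login({
    identifier: process.env.BSKY_HANDLE!,     // bot account handle
    password: process.env.BSKY_APP_PASSWORD!, // an app password, not the real password
  });

  // The real bot would render this from the YAML data files.
  const text = [
    "🏛️ What's free in Chicago this week:",
    "• Field Museum: Wednesday",
    "• MCA: Tuesday evenings 5-9pm",
  ].join("\n");

  await agent.post({ text, createdAt: new Date().toISOString() });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```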
### Data Storage Format

Use YAML or recutils (your prior research interest):

```yaml
id: art-institute
name: Art Institute of Chicago
address: 111 S Michigan Ave
website: artic.edu
free_day_type: seasonal
free_period_start: 2026-01-05
free_period_end: 2026-02-28
free_pattern: Mon,Wed,Thu,Fri
free_time: 11am-close
proof_of_residency: ZIP code verification
```

Either format works; choose based on preference.
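If YAML wins, records are easy to load and type-check in the bot. A sketch using the js-yaml package; the `Institution` interface and file path are illustrative, with fields matching the example above:

```ts
// load.ts -- sketch of reading one institution record; field names match the example above
import { readFileSync } from "node:fs";
import { load } from "js-yaml";

interface Institution {
  id: string;
  name: string;
  address: string;
  website: string;
  free_day_type: string;       // e.g. "seasonal"
  free_period_start?: string;  // ISO date, e.g. "2026-01-05"
  free_period_end?: string;
  free_pattern?: string;       // e.g. "Mon,Wed,Thu,Fri"
  free_time?: string;
  proof_of_residency?: string;
}

const record = load(readFileSync("data/art-institute.yaml", "utf8")) as Institution;
console.log(`${record.name}: free ${record.free_pattern} (${record.free_time})`);
```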
## First Steps to Validate

**1. Audit the scraping targets** (Done ✓)

- Choose Chicago and Loop Chicago are the primary targets
- Both have semantic HTML suitable for scraping
- Update frequency: 2-3x per year
**2. Build a minimal scraper** (sketched below)

- Extract one institution's data from Choose Chicago
- Test date normalization logic
- Validate against official museum sites
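A scraper sketch for step 2, assuming cheerio and Node 18+'s built-in fetch; the URL and selectors are placeholders to verify against the live Choose Chicago markup:

```ts
// scrape.ts -- sketch; the URL and selectors are placeholders, not the real page structure
import * as cheerio from "cheerio";

const PAGE_URL = "https://www.choosechicago.com/free-museum-days/"; // placeholder URL

async function scrape() {
  const html = await (await fetch(PAGE_URL)).text();
  const $ = cheerio.load(html);

  // Hypothetical structure: one heading per institution, free-day text in the next paragraph.
  $("h3").each((_, el) => {
    const name = $(el).text().trim();
    const freeDays = $(el).next("p").text().trim();
    if (name && freeDays) console.log(`${name}: ${freeDays}`);
  });
}

scrape().catch(console.error);
```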
**3. Create a test data file**

- Manually enter ~15 institutions with current free days
- Structure in YAML/recutils
- This becomes the seed for bot posts
**4. Build the minimal bot**

- Post a hardcoded message to test the Bluesky SDK
- Schedule with GitHub Actions (workflow sketch below)
- Verify it works
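For step 4's schedule, a GitHub Actions workflow sketch; the file path, script name, and cron time are illustrative (cron runs in UTC, so 14:00 ≈ 8am Chicago in winter):

```yaml
# .github/workflows/post.yml -- sketch; runs the posting script every Monday
name: Weekly Bluesky post
on:
  schedule:
    - cron: "0 14 * * 1" # Mondays, 14:00 UTC
  workflow_dispatch: {}  # allow manual runs while testing
jobs:
  post:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx tsx post.ts
        env:
          BSKY_HANDLE: ${{ secrets.BSKY_HANDLE }}
          BSKY_APP_PASSWORD: ${{ secrets.BSKY_APP_PASSWORD }}
```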
**5. Automate scraper → data**

- Hook the scraper into a GitHub Actions workflow (sketch below)
- Run weekly/monthly, committing changes as PRs
- Review diffs before merging
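For step 5, a sketch that runs the scraper monthly and opens a PR via the peter-evans/create-pull-request action, so every data change gets human review; script and branch names are illustrative:

```yaml
# .github/workflows/scrape.yml -- sketch; opens a PR only when scraped data changed
name: Refresh museum data
on:
  schedule:
    - cron: "0 12 1 * *" # first of each month, 12:00 UTC
  workflow_dispatch: {}
permissions:
  contents: write
  pull-requests: write
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx tsx scrape.ts # writes updated YAML into data/
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "Data refresh: free museum days"
          commit-message: "Update scraped free-day data"
          branch: data-refresh
```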
**6. Launch the bot**

- Post sample messages for the next week
- Seed with followers (share in Chicago communities)
- Monitor engagement
## Success Metrics

Low bar for success:

- Bot posts on schedule without errors
- Data stays in sync with institution calendars
- Minimal manual maintenance (<1 hr/month)

Bonus outcomes:

- Community contributions (PRs with new institutions or corrections)
- ICS feed reuse (users subscribe in calendar apps)
- Potential expansion to other cities

Failure cases:

- Maintenance requires more than 2-3 hours per month
- Museums change data formats too frequently
- The Bluesky account gets no engagement (unlikely, but acceptable)

Even failure is low-cost: you've proven the concept and built reusable data/code.
## Risks & Mitigations

| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Scraper breaks when site changes | Medium | Monitor scraper logs, pin HTML selectors per institution, accept manual updates |
| Date format inconsistencies | High | Build a robust date parser; test against 2-3 years of historical data |
| Proof-of-residency rules change | Low | Version control; document changes in git history |
| Low bot engagement | Low (civic info is valued) | Share in Chicago subreddits; tag museums and local journalists |
| Maintenance burden grows | Medium | Keep the data structure simple; automate everything possible |
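The date-format risk is the one worth prototyping first. A sketch that expands the `free_pattern` and `free_period_*` fields from the data format above into concrete ISO dates:

```ts
// dates.ts -- sketch: expand a free_pattern over a free_period into concrete dates
const DAY_INDEX: Record<string, number> = {
  Sun: 0, Mon: 1, Tue: 2, Wed: 3, Thu: 4, Fri: 5, Sat: 6,
};

// e.g. freeDates("2026-01-05", "2026-02-28", "Mon,Wed,Thu,Fri")
export function freeDates(start: string, end: string, pattern: string): string[] {
  const wanted = new Set(pattern.split(",").map((d) => DAY_INDEX[d.trim()]));
  const out: string[] = [];
  // Walk the period day by day; working in UTC avoids DST edge cases.
  for (
    let t = Date.parse(start + "T00:00:00Z");
    t <= Date.parse(end + "T00:00:00Z");
    t += 86_400_000
  ) {
    const d = new Date(t);
    if (wanted.has(d.getUTCDay())) out.push(d.toISOString().slice(0, 10));
  }
  return out;
}
```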
## Next: The ICS Feed

Once the bot is stable, generating an ICS calendar feed is trivial: users could subscribe in Google Calendar, Apple Calendar, or Outlook. This is Direction 2 from your research.
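A feed-generator sketch reusing `freeDates` from the date-parser sketch above; the PRODID and UID scheme are illustrative:

```ts
// ics.ts -- sketch: one all-day VEVENT per free day for one institution
import { freeDates } from "./dates";

export function toICS(name: string, id: string, start: string, end: string, pattern: string): string {
  // DTSTAMP is required by RFC 5545; format as UTC basic form (YYYYMMDDTHHMMSSZ).
  const stamp = new Date().toISOString().replace(/[-:]/g, "").slice(0, 15) + "Z";
  const lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//chi-free-museums//EN"];
  for (const date of freeDates(start, end, pattern)) {
    lines.push(
      "BEGIN:VEVENT",
      `UID:${id}-${date}@chi-free-museums`, // illustrative UID scheme
      `DTSTAMP:${stamp}`,
      `DTSTART;VALUE=DATE:${date.replace(/-/g, "")}`, // all-day event
      `SUMMARY:Free day: ${name}`,
      "END:VEVENT",
    );
  }
  lines.push("END:VCALENDAR");
  return lines.join("\r\n"); // ICS requires CRLF line endings
}
```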
## Conclusion

Building a Bluesky bot is the highest-ROI first step because it:

- Validates the data collection and normalization process
- Fills a genuine gap on the platform
- Requires minimal code and infrastructure
- Generates reusable data for the ICS feed and future formats
- Is low-cost to maintain or abandon

The bar for success is low. The potential upside is genuine (civic value + community engagement). The learning is high (data pipelines, scraping, bot APIs, git workflows).