There's a clear gap in the Chicago cultural information ecosystem: no Bluesky bot exists to notify residents about free museum days.
The information aggregation problem is largely solved by existing websites (Choose Chicago, Upparent, Do312), but the distribution/discoverability problem is not.
A Bluesky bot would:
- Passively surface free days to people who already use the platform
- Fill a civic information niche that's underserved on Bluesky
- Require minimal code and maintenance
- Generate reusable data for other formats (ICS feed, embeddable widget)
| Site | Format | Calendar | ICS/Subscribe | Filtering | Strengths | Gaps |
|---|---|---|---|---|---|---|
| Choose Chicago | Editorial | ✗ | ✗ | ✗ | Official, comprehensive | Static, no interactivity |
| Upparent | Interactive list | ✓ Date picker | Newsletter | ✓ Sort/filter | Calendar view, ratings | Parent-focused, no ICS |
| Do312 | Long-form guide | ✗ | Newsletter | ✗ | Rich descriptions | Must manually parse |
| My Kid List | Editorial list | ✗ | Daily/weekly email | ✗ | Comprehensive, updated | Static layout |
| Loop Chicago | Blog post | ✗ | ✗ | ✗ | Clean layout | Annual refresh only |
@FreeMuseumDay on Twitter
- Covers: Portland, SF, Boston, Chicago, LA, NYC, Seattle
- Links to freemuseumday.org for details
- Status: Appears dormant or minimally active on Chicago posts
- No free museum days bot exists on Bluesky
- Bluesky is growing rapidly but has fewer bots than Twitter
- Civic information niche is underserved
- Lower noise/competition for attention
The aggregator sites do the heavy lifting of compiling dates from individual institutions. Scrape these two instead of hitting each museum site:
- Choose Chicago
  - Format: H3 headings per museum + bulleted date lists
  - Structure: Clean semantic HTML (`<ul><li>`)
  - Update frequency: Periodic (2-3x per year)
  - Scraping strategy: Parse each H3 for the institution name, extract the `<li>` dates
- Loop Chicago
  - Format: Museum section + free day listings
  - Structure: Similar semantic HTML layout
  - Update frequency: Annual refresh (early January)
  - Scraping strategy: Same as above (see the cheerio sketch after this list)
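A minimal sketch of that H3-plus-list strategy, assuming Node 18+ (global `fetch`) and cheerio; the selectors are guesses to verify against the live markup:

```typescript
// Hypothetical selectors; verify against the live markup before relying on them.
import * as cheerio from "cheerio";

interface RawListing {
  institution: string;
  dateLines: string[];
}

async function scrapeAggregator(url: string): Promise<RawListing[]> {
  const html = await (await fetch(url)).text(); // Node 18+ global fetch
  const $ = cheerio.load(html);
  const listings: RawListing[] = [];

  $("h3").each((_, heading) => {
    const institution = $(heading).text().trim();
    // Grab the <li> date lines between this H3 and the next one.
    const dateLines = $(heading)
      .nextUntil("h3", "ul")
      .find("li")
      .map((_, li) => $(li).text().trim())
      .get();
    if (dateLines.length > 0) listings.push({ institution, dateLines });
  });

  return listings;
}
```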
Recurring patterns (no scraping needed; hardcode rules, as sketched after this list):
- Field Museum: Every Wednesday
- DuSable Black History Museum: Every Wednesday
- Peggy Notebaert Nature Museum: Every Thursday
- MCA Chicago: Every Tuesday 5-9pm (year-round)
- Lincoln Park Zoo: Every day (always free)
- National Museum of Mexican Art: Every day (always free)
- Chicago Cultural Center: Every day (always free)
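Since these rules are stable, they can live in code rather than scraped data. A sketch with hypothetical names, using JavaScript's `Date.getDay()` numbering (0 = Sunday through 6 = Saturday):

```typescript
// Hypothetical rule table for the stable cases above.
interface RecurringRule {
  institution: string;
  day: number | "daily"; // Date.getDay() value, or free every day
  note?: string;
}

const RECURRING_RULES: RecurringRule[] = [
  { institution: "Field Museum", day: 3 },
  { institution: "DuSable Black History Museum", day: 3 },
  { institution: "Peggy Notebaert Nature Museum", day: 4 },
  { institution: "MCA Chicago", day: 2, note: "5-9pm only" },
  { institution: "Lincoln Park Zoo", day: "daily" },
  { institution: "National Museum of Mexican Art", day: "daily" },
  { institution: "Chicago Cultural Center", day: "daily" },
];

// All institutions free on a given date under the recurring rules.
function freeOn(date: Date): RecurringRule[] {
  return RECURRING_RULES.filter(
    (rule) => rule.day === "daily" || rule.day === date.getDay(),
  );
}
```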
Scattered/seasonal dates (need scraping):
- Art Institute: Seasonal weekday windows (Jan-Feb, etc.)
- Shedd Aquarium: Specific date ranges, varies by month
- Griffin MSI: 15-20 specific dates scattered across year
- Chicago History Museum: Mix of holidays + promotional dates
- Adler Planetarium: Scattered dates + seasonal Tuesday nights
- Chicago Botanic Garden: Specific dates, requires registration
Scraping challenges:
- Some sites use JavaScript rendering (need Playwright/Puppeteer; see the sketch after this list)
- Some return 403 on direct fetches (Shedd)
- Date formats inconsistent within and across sites
- Eligibility rules embedded in prose
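For the JavaScript-rendered sites, a headless browser can stand in for a plain fetch. A minimal Playwright sketch; the rendered HTML then feeds the same parser used for static pages:

```typescript
// Headless fetch for JS-rendered pages.
import { chromium } from "playwright";

async function fetchRendered(url: string): Promise<string> {
  const browser = await chromium.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle" });
    return await page.content(); // fully rendered HTML
  } finally {
    await browser.close();
  }
}
```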
- Checked data.cityofchicago.org: no dedicated museum free-days API exists
- One is not likely to appear in the near future
Free days fall into two categories that need different handling:
Recurring patterns:
- institution: "Field Museum" free_day_type: "recurring" pattern: "every Wednesday" time: "9am-5pm" proof_of_residency: "Illinois ID required"
Scattered/seasonal dates:
- institution: "Art Institute" free_day_type: "dated" dates: - start: "2026-01-05" end: "2026-02-28" pattern: "weekdays only" time: "11am-close" proof_of_residency: "ZIP code verification"
Normalization needs:
- Convert prose patterns ("January 19-23, 27-30") → actual date lists (see the sketch after this list)
- Standardize time formats
- Document proof-of-residency requirements consistently
- Flag when scraping source becomes stale
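A sketch of the prose-to-dates step, assuming a single leading month name and a known year; real inputs will need more cases:

```typescript
const MONTHS = [
  "january", "february", "march", "april", "may", "june",
  "july", "august", "september", "october", "november", "december",
];

// Expand "January 19-23, 27-30" into ISO dates for a given year.
function expandProseDates(text: string, year: number): string[] {
  const [monthName, ...restParts] = text.split(" ");
  const month = MONTHS.indexOf(monthName.toLowerCase()) + 1;
  if (month === 0) throw new Error(`Unknown month in: ${text}`);

  const dates: string[] = [];
  for (const chunk of restParts.join(" ").split(",")) {
    // "19-23" → [19, 23]; a bare "27" → start and end both 27.
    const [start, end = start] = chunk.trim().split("-").map(Number);
    for (let day = start; day <= end; day++) {
      const mm = String(month).padStart(2, "0");
      const dd = String(day).padStart(2, "0");
      dates.push(`${year}-${mm}-${dd}`);
    }
  }
  return dates;
}

// expandProseDates("January 19-23, 27-30", 2026)
// → ["2026-01-19", ..., "2026-01-23", "2026-01-27", ..., "2026-01-30"]
```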
- Minimal code required
  - ~100 lines of TypeScript/JavaScript
  - Bluesky SDK is simple and well-documented
  - GitHub Actions handles scheduling
- Zero infrastructure cost
  - GitHub Actions is free
  - No database (data lives in git)
  - No hosting fees
- Low maintenance burden
  - Monthly or quarterly scrape updates
  - GitHub Actions logs for monitoring
  - Community PRs can help maintain data
- Proven format for Bluesky
  - Simple daily/weekly posts work well on the platform
  - Civic information is valued by the Bluesky community
  - Low friction to follow and engage
- Reusable data pipeline
  - Scraper output feeds the bot, the ICS feed, and a future embeddable widget
  - Single source of truth in version control
  - Easy to validate and review changes
Why not a standalone website:
- Aggregator sites already solve this problem adequately
- Users are fragmented across multiple platforms (email, calendar, social)
- Web traffic for this niche is low
- Maintenance burden is higher than a bot's
Why not an email newsletter:
- Lower engagement than a social platform for casual discovery
- Requires email list building and maintenance
- Subject to spam filters
- Already exists in limited form (My Kid List offers daily/weekly emails)
- No competition. Twitter has @FreeMuseumDay (dormant). Bluesky has nothing.
- Growing audience. Bluesky is experiencing rapid adoption, especially among civic-minded users.
- Civic information is valued. Bluesky users appreciate local information, mutual aid bots, etc.
- Easier to build bot culture. Fewer bots total = less noise, more discovery.
- Lower barrier to entry. Small bots can find their audience; don't need millions of followers.
```
Choose Chicago / Loop Chicago (web)
  → Scraper (Node.js + cheerio or similar)
  → Normalize to YAML (or recutils)
  → GitHub repo with git history
    → [Branch 1] Bluesky Bot (posts daily/weekly)
    → [Branch 2] ICS Feed Generator (for calendar apps)
    → [Branch 3] Future: Embeddable widget
```
Stack:
- Node.js + TypeScript (or Python)
- Bluesky SDK (`@atproto/api`)
- GitHub Actions (cron schedule; workflow sketch below)
- Data: YAML files in git
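A scheduled workflow might look like this sketch (the file path, secret names, and entry script are placeholders):

```yaml
# Assumed path: .github/workflows/post.yml
name: weekly-post
on:
  schedule:
    - cron: "0 13 * * 1" # Mondays 13:00 UTC (morning in Chicago)
  workflow_dispatch: # allow manual runs for testing
jobs:
  post:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Entry script name is a placeholder.
      - run: node dist/post.js
        env:
          BSKY_HANDLE: ${{ secrets.BSKY_HANDLE }}
          BSKY_APP_PASSWORD: ${{ secrets.BSKY_APP_PASSWORD }}
```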
Posting schedule:
- Monday morning: "Here's what's free this week"
- Could also do: Daily post, weekly digest, or real-time alerts
Example bot post:
🏛️ What's free in Chicago this week:
β’ Art Institute: Mon-Fri (weekdays only)
β’ Field Museum: Wednesday
β’ MCA: Tuesday evenings 5-9pm
+ 3 more (thread 🧵)
Data source: github.com/your-repo
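Posting such a digest is a few lines with `@atproto/api`. A sketch, with placeholder env var names; use a Bluesky app password, not the account password:

```typescript
// Minimal post via the official SDK.
import { BskyAgent } from "@atproto/api";

async function postDigest(text: string): Promise<void> {
  const agent = new BskyAgent({ service: "https://bsky.social" });
  await agent.login({
    identifier: process.env.BSKY_HANDLE!,     // e.g. the bot's handle
    password: process.env.BSKY_APP_PASSWORD!, // app password from settings
  });
  await agent.post({ text }); // createdAt is filled in by the SDK
}
```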
Use YAML or recutils (your prior research interest):
YAML approach:
```yaml
institutions:
  - id: "art-institute"
    name: "Art Institute of Chicago"
    address: "111 S Michigan Ave"
    website: "artic.edu"
    free_days:
      - type: "seasonal"
        periods:
          - name: "Winter Weekdays"
            start: "2026-01-05"
            end: "2026-02-28"
            pattern: "Mon, Wed, Thu, Fri"
            time: "11am-close"
        proof_of_residency: "ZIP code verification"
      - type: "permanent"
        eligibility:
          - "Children under 14 (always free)"
          - "Chicago residents under 18 (always free)"
  - id: "field-museum"
    name: "Field Museum"
    address: "1400 S Lake Shore Dr"
    website: "fieldmuseum.org"
    free_days:
      - type: "recurring"
        pattern: "every Wednesday"
        time: "9am-5pm"
        proof_of_residency: "Illinois ID required"
```
Recutils approach:
```
id: art-institute
name: Art Institute of Chicago
address: 111 S Michigan Ave
website: artic.edu
free_day_type: seasonal
free_period_start: 2026-01-05
free_period_end: 2026-02-28
free_pattern: Mon,Wed,Thu,Fri
free_time: 11am-close
proof_of_residency: ZIP code verification
```
Either format works; choose based on preference.
- Audit the scraping targets (Done ✓)
  - Choose Chicago and Loop Chicago are primary targets
  - Both have semantic HTML suitable for scraping
  - Update frequency: 2-3x per year
- Build a minimal scraper
  - Extract one institution's data from Choose Chicago
  - Test date normalization logic
  - Validate against official museum sites
- Create test data file
  - Manually enter ~15 institutions with current free days
  - Structure in YAML/recutils
  - This becomes the seed for bot posts
- Build minimal bot
  - Post a hardcoded message to test the Bluesky SDK
  - Schedule with GitHub Actions
  - Verify it works
- Automate scraper → data
  - Hook scraper into GitHub Actions workflow
  - Run weekly/monthly, commit changes as PRs
  - Review diffs before merging
- Launch bot
  - Post sample messages for the next week
  - Seed with followers (share in Chicago communities)
  - Monitor engagement
Low bar for success:
- Bot posts on schedule without errors
- Data stays in sync with institution calendars
- Minimal manual maintenance (<1 hr/month)
Bonus outcomes:
- Community contributions (PRs with new institutions or corrections)
- ICS feed reuse (users subscribe in calendar apps)
- Potential expansion to other cities
Failure case:
- Requires more than 2-3 hours of maintenance per month
- Museums change data format too frequently
- Bluesky account gets no engagement (unlikely, but acceptable)
Even failure is low-cost: you've proven the concept and built reusable data/code.
| Risk | Likelihood | Mitigation |
|---|---|---|
| Scraper breaks when site changes | Medium | Monitor scraper logs, pin HTML selectors per institution, accept occasional manual updates |
| Date format inconsistencies | High | Build robust date parser, test against 2-3 years of historical data |
| Proof-of-residency rules change | Low | Version control, document changes in git history |
| Low bot engagement | Low (civic info is valued) | Share in Chicago subreddits; tag museums and local journalists |
| Maintenance burden grows | Medium | Keep data structure simple, automate everything possible |
Once the bot is stable, generating an ICS calendar feed is trivial:
```typescript
// Sketch: assumes free_days have been expanded to concrete dates,
// and `institutions` is loaded from the YAML data file.
import * as fs from "fs";

const lines: string[] = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//chicago-free-days//EN"];

institutions.forEach((inst) => {
  inst.free_days.forEach((day) => {
    const stamp = day.date.replace(/-/g, ""); // "2026-01-05" → "20260105"
    lines.push(
      "BEGIN:VEVENT",
      `UID:${inst.id}-${stamp}@chicago-free-days`,
      `DTSTART;VALUE=DATE:${stamp}`,
      `SUMMARY:Free: ${inst.name}`,
      `DESCRIPTION:Address: ${inst.address}\\nProof: ${day.proof_of_residency}`,
      "END:VEVENT",
    );
  });
});

lines.push("END:VCALENDAR");
fs.writeFileSync("chicago-free-days.ics", lines.join("\r\n"));
```
Users could subscribe in Google Calendar, Apple Calendar, Outlook. This is Direction 2 from your research.
Building a Bluesky bot is the highest-ROI first step because it:
- Validates the data collection and normalization process
- Fills a genuine gap on the platform
- Requires minimal code and infrastructure
- Generates reusable data for ICS feed and future formats
- Is low-cost to maintain or abandon
The bar for success is low. The potential upside is genuine (civic value + community engagement). The learning is high (data pipeline, scraping, bot APIs, git workflows).
Worth building.