hireping
This project contains a Val Town cron job that watches the Anthropic Greenhouse job board and tracks changes over time.
`anthropic-job-tracker-cron.ts`:
- Fetches all pages from `https://job-boards.greenhouse.io/anthropic`
- Extracts job post URLs from the page's `window.__remixContext` data
- Visits each job post page and stores the raw job payload JSON
- Compares the latest scrape against previously stored jobs
- Detects newly added jobs
- Detects removed jobs and archives them
- Records each time a job was observed
- Emails a summary when new jobs are found
The script runs as an exported async default function, which makes it suitable for use as a Val Town scheduled job.
Its flow is:
- Ensure the SQLite schema exists
- Load previously known jobs from the `jobs` table
- Scrape the Anthropic Greenhouse board page by page
- Hydrate each job with its detailed page payload
- Compute:
  - `newJobs`: jobs now present that were not previously stored
  - `removedJobs`: jobs previously stored that no longer appear
- Insert observation records for all currently visible jobs
- Upsert removed jobs into `archived_jobs`
- Replace the current contents of `jobs` with the latest snapshot
- Send an email if any new jobs were found
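The `newJobs`/`removedJobs` computation is a set difference on job URLs. A minimal sketch of that step (the `Job` shape and `diffJobs` name here are illustrative, not necessarily what the script uses):

```typescript
// Illustrative sketch of the diff step; `Job` and `diffJobs` are
// hypothetical names, not necessarily those used in the script.
interface Job {
  url: string;
  raw_json: string;
}

function diffJobs(previous: Job[], current: Job[]) {
  const prevUrls = new Set(previous.map((j) => j.url));
  const currUrls = new Set(current.map((j) => j.url));
  // Jobs now present that were not previously stored
  const newJobs = current.filter((j) => !prevUrls.has(j.url));
  // Jobs previously stored that no longer appear
  const removedJobs = previous.filter((j) => !currUrls.has(j.url));
  return { newJobs, removedJobs };
}
```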
The script creates and uses three SQLite tables:
The `jobs` table holds the current snapshot of active job postings.
- `url`: primary key
- `raw_json`: serialized job payload from the job detail page
- `last_seen_at`: timestamp of the most recent sync
The `archived_jobs` table holds jobs that used to exist on the board but disappeared in a later run.
- `url`: primary key
- `raw_json`: last known serialized payload
- `archived_at`: timestamp when the removal was detected
The third table is an append-only history of when a job was seen.
- `url`
- `seen_at`
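Given those column descriptions, the schema plausibly looks like the DDL below. The `jobs` and `archived_jobs` names appear elsewhere in this README; the observation table's name is not shown here, so `job_observations` is a placeholder, and the column types are assumptions:

```sql
-- Sketch of the schema implied above; column types are assumptions.
CREATE TABLE IF NOT EXISTS jobs (
  url TEXT PRIMARY KEY,
  raw_json TEXT NOT NULL,
  last_seen_at TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS archived_jobs (
  url TEXT PRIMARY KEY,
  raw_json TEXT NOT NULL,
  archived_at TEXT NOT NULL
);

-- "job_observations" is a placeholder name for the append-only sighting log.
CREATE TABLE IF NOT EXISTS job_observations (
  url TEXT NOT NULL,
  seen_at TEXT NOT NULL
);
```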
When new jobs are detected, the script sends an email with:
- the number of new jobs
- each job title, if available
- each job URL
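Formatting that summary can be as simple as the sketch below (`formatNewJobsEmail` is a hypothetical helper, not necessarily what the script does):

```typescript
// Hypothetical helper that builds the email text from new jobs.
interface NewJob {
  url: string;
  title?: string; // title may be absent in the scraped payload
}

function formatNewJobsEmail(newJobs: NewJob[]): string {
  const lines = [`${newJobs.length} new job(s) found:`];
  for (const job of newJobs) {
    // Include the title if available; always include the URL
    lines.push(job.title ? `- ${job.title}: ${job.url}` : `- ${job.url}`);
  }
  return lines.join("\n");
}
```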
At the end of a successful run, the function returns an object like:
```ts
{
  ok: true,
  syncedAt: "2026-04-09T00:00:00.000Z",
  count: 42,
  newCount: 3,
  archivedCount: 1
}
```
- The scraper depends on Greenhouse exposing data inside `window.__remixContext`
- Job details are cached indirectly by reusing stored `raw_json` when a known URL already exists
- `syncJobs()` replaces the `jobs` table contents on every run, so that table always reflects the latest snapshot
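The `window.__remixContext` dependency means the scraper must pull that object out of the page HTML. A minimal extraction sketch, assuming the payload appears as a JSON-parseable inline assignment (real Remix output can be more complex, which is what makes this the fragile part of the approach):

```typescript
// Hedged sketch: pull the window.__remixContext payload out of page HTML.
// Assumes `window.__remixContext = {...};` appears in an inline script and
// that the right-hand side parses as JSON; treat this as illustrative only.
function extractRemixContext(html: string): unknown {
  const match = html.match(/window\.__remixContext\s*=\s*(\{[\s\S]*?\});/);
  if (!match) return null;
  try {
    return JSON.parse(match[1]);
  } catch {
    return null; // payload was not plain JSON
  }
}
```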
