Scrape users from any GitHub repo and automatically ingest them into Clay. This val scrapes the username of anyone who has created an issue, reacted to an issue, commented on an issue, reacted to a comment, starred the repo, or forked it.
- Click **Remix** on the top-right to get a copy of it
- Set up a Clay workbook with a **Webhook** column
- Copy your Clay workbook's **Webhook** URL
- Set that as `CLAY_WEBHOOK_URL` in this val's **Environment variables** on the left sidebar
- In config.ts, replace `GITHUB_REPO` with the full repository name (format: `"owner/repo"`); a minimal config.ts sketch follows below

And that's it! The cron will run on your repo every 30 minutes from now on. To test it out immediately, navigate to main.ts and click **Run**.
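For reference, a minimal config.ts might look like the sketch below. `GITHUB_REPO` and `ENABLE_DEDUPLICATION` are the settings described in this README; any other details are assumptions, so check your copy of the val.

```ts
// config.ts -- a minimal sketch, not necessarily the val's exact file.

// Full repository name in "owner/repo" format.
export const GITHUB_REPO = "owner/repo";

// When true, users already sent to Clay are skipped on later runs.
export const ENABLE_DEDUPLICATION = true;

// Read the Clay webhook URL from the val's environment variables
// (Val Town vals run on Deno, so Deno.env is available).
export const CLAY_WEBHOOK_URL = Deno.env.get("CLAY_WEBHOOK_URL");
```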
- config.ts - Configuration settings for the scraper
- database.ts - SQLite database for tracking sent users
- clay.ts - Send data to Clay
- github.ts - Collects engaged users and fetches usernames
- main.ts - Cron trigger that orchestrates the scraping and Clay integration (a sketch follows below)
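As a rough sketch, main.ts wires these modules together along these lines. The helper names (`getEngagedUsers`, `filterNewUsers`, `sendToClay`, `markUsersSent`) are illustrative assumptions, not the val's actual exports:

```ts
// main.ts -- illustrative sketch of the cron entrypoint.
import { ENABLE_DEDUPLICATION, GITHUB_REPO } from "./config.ts";
import { getEngagedUsers } from "./github.ts"; // hypothetical helper
import { filterNewUsers, markUsersSent } from "./database.ts"; // hypothetical helpers
import { sendToClay } from "./clay.ts"; // hypothetical helper

export default async function () {
  // Collect everyone who engaged with the configured repo.
  const users = await getEngagedUsers(GITHUB_REPO);

  // Skip users the database says were already sent, unless dedup is off.
  const toSend = ENABLE_DEDUPLICATION
    ? await filterNewUsers(users, GITHUB_REPO)
    : users;

  // Push the remaining users to the Clay webhook, then record them.
  await sendToClay(toSend);
  if (ENABLE_DEDUPLICATION) {
    await markUsersSent(toSend, GITHUB_REPO);
  }
}
```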
This scraper automatically tracks which users have been sent to Clay to prevent
duplicates. The feature is controlled by the ENABLE_DEDUPLICATION flag in
config.ts (enabled by default).
How it works (see the sketch after this list):
- Users are stored in a SQLite database after being successfully sent to Clay
- On subsequent runs, the scraper checks the database and only sends new users
- The database persists across all cron runs, so users are only sent once
- Each user is tracked by their GitHub username and source repository
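A sketch of what that check can look like, assuming Val Town's std/sqlite module; the function names and column names here are illustrative, not the val's actual code:

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// Hypothetical helper: true if this username/repo pair was already sent to Clay.
async function isAlreadySent(username: string, repo: string): Promise<boolean> {
  const result = await sqlite.execute({
    sql: "SELECT 1 FROM tracked_users WHERE username = ? AND source_repo = ?",
    args: [username, repo],
  });
  return result.rows.length > 0;
}

// Hypothetical helper: record a user after a successful send so future runs skip them.
async function markSent(username: string, repo: string): Promise<void> {
  await sqlite.execute({
    sql: "INSERT OR IGNORE INTO tracked_users (username, source_repo) VALUES (?, ?)",
    args: [username, repo],
  });
}
```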
Database details:
- Users are tracked in the `tracked_users` table
- Stores: username, source repo, and timestamp of first encounter
- The database automatically initializes on the first run (a schema sketch follows below)
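A plausible initialization, again assuming std/sqlite; the exact column names are an assumption based on the fields listed above:

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

// Safe to run on every startup: CREATE TABLE IF NOT EXISTS is a no-op
// once the table exists, which makes first-run initialization automatic.
await sqlite.execute(`
  CREATE TABLE IF NOT EXISTS tracked_users (
    username TEXT NOT NULL,
    source_repo TEXT NOT NULL,
    first_seen_at TEXT DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (username, source_repo)
  )
`);
```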
To disable deduplication:
Set ENABLE_DEDUPLICATION = false in config.ts to send all
engaged users on every run (useful for testing or if you want to manage
deduplication in Clay instead).
On larger repos, you may get rate-limited by GitHub. To mitigate this, Val Town uses a proxied fetch that reroutes requests through a proxy vendor so that requests come from different IP addresses, and it automatically retries failed requests several times. Note that std/fetch will be significantly slower than calling the JavaScript Fetch API directly, due to the extra network hops.
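Usage is a drop-in swap for the global fetch. A small sketch, where the GitHub URL is just an example:

```ts
import { fetch } from "https://esm.town/v/std/fetch";

// Same signature as the standard Fetch API, but requests are proxied
// (different IPs) and retried automatically on failure.
const res = await fetch("https://api.github.com/repos/owner/repo/stargazers");
const stargazers = await res.json();
console.log(`Fetched ${stargazers.length} stargazers`);
```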