Automatically ingest GitHub accounts of everyone who interacts with your repo into Clay.
This val tracks anyone who:
- Created an issue
- Reacted to an issue
- Commented on an issue
- Reacted to an issue's comments
- Starred a repo
- Forked a repo
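For a rough picture of how these interactions translate into API calls, here is a minimal sketch using the public GitHub REST API; the val's actual collection logic lives in github.ts and may differ:

```ts
// Minimal sketch: collect the usernames of people who starred a repo or
// opened issues on it, via the public GitHub REST API.
// "owner/repo" and the optional token are placeholders.
async function collectEngagedUsers(repo: string, token?: string): Promise<Set<string>> {
  const headers: Record<string, string> = token ? { Authorization: `Bearer ${token}` } : {};
  const users = new Set<string>();

  // People who starred the repo
  const stars = await fetch(`https://api.github.com/repos/${repo}/stargazers?per_page=100`, { headers });
  for (const stargazer of await stars.json()) users.add(stargazer.login);

  // People who opened issues (comments and reactions need further calls)
  const issues = await fetch(`https://api.github.com/repos/${repo}/issues?state=all&per_page=100`, { headers });
  for (const issue of await issues.json()) users.add(issue.user.login);

  return users;
}
```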
To set it up:

- Click **Remix** on the top right to get your own copy of the val
- Set up a Clay workbook with a **Webhook** column
- Copy your Clay workbook's **Webhook** URL
- Set that URL as `CLAY_WEBHOOK_URL` in this val's **Environment variables** (left sidebar)
- In `config.ts`, replace `GITHUB_REPO` with the full repository name (format: `"owner/repo"`); see the config sketch below

And that's it! The cron will run on your repo every 30 minutes from now on. To test it out immediately, navigate to `main.ts` and click **Run**.
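For reference, `config.ts` could look roughly like this; only `GITHUB_REPO` and `ENABLE_DEDUPLICATION` are named in this README, so treat everything else as illustrative:

```ts
// config.ts - hypothetical sketch; the real file may contain more settings.

// Full repository name in "owner/repo" format.
export const GITHUB_REPO = "owner/repo";

// When true, users already sent to Clay are skipped on later runs
// (see the deduplication section below).
export const ENABLE_DEDUPLICATION = true;
```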
Project structure:

- `config.ts` - Configuration settings for the scraper
- `database.ts` - SQLite database for tracking sent users
- `clay.ts` - Sends data to Clay
- `github.ts` - Collects engaged users and fetches usernames
- `main.ts` - Cron trigger that orchestrates the scraping and Clay integration (sketched below)
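A rough sketch of how `main.ts` could tie these files together; the helper names below (`collectEngagedUsers`, `filterNewUsers`, `sendToClay`, `markAsSent`) are assumptions for illustration, not necessarily the val's actual exports:

```ts
// main.ts - hypothetical orchestration sketch; check the real modules for actual exports.
import { GITHUB_REPO, ENABLE_DEDUPLICATION } from "./config.ts";
import { collectEngagedUsers } from "./github.ts";
import { filterNewUsers, markAsSent } from "./database.ts";
import { sendToClay } from "./clay.ts";

export default async function () {
  // 1. Gather everyone who starred, forked, or interacted with issues.
  const users = await collectEngagedUsers(GITHUB_REPO);

  // 2. Optionally drop users who were already sent to Clay.
  const toSend = ENABLE_DEDUPLICATION ? await filterNewUsers(users, GITHUB_REPO) : [...users];

  // 3. Push the remaining users to the Clay webhook and record them.
  for (const user of toSend) {
    await sendToClay(user);
    if (ENABLE_DEDUPLICATION) await markAsSent(user, GITHUB_REPO);
  }
}
```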
This scraper automatically tracks which users have been sent to Clay to prevent duplicates. The feature is controlled by the `ENABLE_DEDUPLICATION` flag in `config.ts` (enabled by default).
How it works:
- Users are stored in a SQLite database after being successfully sent to Clay
- On subsequent runs, the scraper checks the database and only sends new users
- The database persists across all cron runs, so users are only sent once
- Each user is tracked by their GitHub username and source repository
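A minimal sketch of that check, assuming Val Town's std/sqlite and a `source_repo` column; the actual query in database.ts may differ:

```ts
// Hypothetical dedup check using Val Town's std/sqlite.
import { sqlite } from "https://esm.town/v/std/sqlite";

// Return only the users that have not yet been sent to Clay for this repo.
export async function filterNewUsers(users: Set<string>, repo: string): Promise<string[]> {
  const result = await sqlite.execute({
    sql: "SELECT username FROM tracked_users WHERE source_repo = ?",
    args: [repo],
  });
  const alreadySent = new Set(result.rows.map((row) => String(row[0])));
  return [...users].filter((user) => !alreadySent.has(user));
}
```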
Database details:
- Users are tracked in the `tracked_users` table
- Stores: username, source repo, and timestamp of first encounter
- The database automatically initializes on the first run
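The initialization could look something like this (the column names are assumptions based on the fields listed above; the real schema lives in database.ts):

```ts
// Hypothetical setup of the tracked_users table with Val Town's std/sqlite.
import { sqlite } from "https://esm.town/v/std/sqlite";

export async function initDatabase() {
  await sqlite.execute(`
    CREATE TABLE IF NOT EXISTS tracked_users (
      username TEXT NOT NULL,
      source_repo TEXT NOT NULL,
      first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
      PRIMARY KEY (username, source_repo)
    )
  `);
}
```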
To disable deduplication:
Set `ENABLE_DEDUPLICATION = false` in `config.ts` to send all engaged users on every run (useful for testing, or if you want to manage deduplication in Clay instead).
On larger repos, you may get rate-limited by GitHub. To mitigate this, the scraper uses Val Town's proxied fetch (`std/fetch`), which reroutes requests through a proxy vendor so that they come from different IP addresses, and automatically retries failed requests several times. Note that `std/fetch` is significantly slower than calling the JavaScript Fetch API directly, due to the extra network hops.
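If you extend the scraper and want the same proxying and retries for your own requests, the proxied fetch can be imported from Val Town's standard library (a sketch, assuming the std/fetch module on esm.town):

```ts
// Proxied, auto-retrying fetch from Val Town's standard library,
// used as a drop-in replacement for the global fetch.
import { fetch } from "https://esm.town/v/std/fetch";

const resp = await fetch("https://api.github.com/repos/owner/repo/stargazers");
const stargazers = await resp.json();
console.log(stargazers.length);
```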