Every morning at 7am, a machine on Val Town wakes up, trains a logistic regression on my history, and tells me how many burpees to do. The number appears on my watch, in my Telegram, on a glowing LED cube on my desk, and on a web dashboard I built. If I disagree with the number, I can argue back, and the number actually changes.
Gee, this seems like a lot.
Actually—I don't think it's enough.
Think about the other side of the ledger. Entire product teams, backed by billions in venture capital, spend their careers figuring out exactly how to get me to tap one more thing. The notification timing. The color of the button. The streak I'd lose if I stop. The variable-ratio reinforcement schedule disguised as a feed. They are extremely good at this. They have PhDs and A/B testing infrastructure and literally more money than God.
And what do I have fighting for the things I actually care about? A vague sense of guilt? A notes app reminder I'll swipe away?
No. If I'm going to take myself seriously, the things that matter need at least as much investment in engagement as the things that don't. I need to run an ad campaign. The product is burpees. The audience is me.
I used to spread myself across a dozen habits—which meant I was always behind on something and nothing got enough attention to actually work. Now I have one keystone metric: VO₂ max, one of the strongest predictors of how long and how well you live. High-intensity interval training is the highest-leverage way to move it. Burpees are HIIT you can do in your living room with no equipment.
But there's a yin-yang problem. Burpees are explosive—pure yang. Do them cold and you get hurt. I need sun salutations first to warm up. But historically the yin blocks the yang: if I have to do 20 minutes of yoga before I even start, that's friction, and friction kills habits.
So I made the system eat its own prerequisite. Five sun salutations count toward the burpee goal. The warm-up is subsumed by the goal itself. Yin and yang, one number.
The goal isn't arbitrary and it isn't fixed. A logistic regression model trains fresh each morning on all my history—31 features including streak direction, rolling success rates, day-of-week cyclicality, effort ratios, how close I got on days I missed. It sweeps goals 1–50, computes P(success) × goal for each, and picks the peak expected value.
The model is sometimes annoyingly right about bad days. When it gives me a lower number than I expected, I've learned to pay attention.
Two things make this different from a fixed-number goal app. First, when I miss, the model already has an opinion about tomorrow. The goal drops. The increment adjusts. There's no guilt spiral and no broken streak—just a new number that accounts for what happened. The system is never surprised by failure.
Second—and this is the piece I'm most excited about—I can argue back.
https://dcm31--22eabcfe1a4311f1953c42dde27851f2.web.val.run
When the model proposes a goal, I submit my own prediction: "I think there's a 75% chance I hit this." That prediction gets stored as a feature. The model re-trains with it. And the goal can actually shift.
https://dcm31--9cc8f2ac1b2911f18fb042dde27851f2.web.val.run
If I'm more confident than the model—slept well, have time, feeling strong—the goal nudges up. If I'm doubtful, it pulls back. Each round creates a row in a predictions table: date, round, goal shown, model probability, my probability. Those columns become features for future training. The model literally learns from how I felt about its suggestions—and whether my feelings were accurate.
This makes the goal a negotiation. Not between me and an app I can always dismiss, but between me and a model that has seen every day I've ever tracked, including the days I lied to myself about being motivated.
Both predictions—mine and the model's—then get posted to Fatebook as real forecasting questions: "If my goal is 32, will I complete 32 burpees on 2026-03-08?" They resolve YES or NO the next morning, automatically, from the database.
Over time this builds two calibration tracks. Am I actually accurate when I say 70%? Is the model? Do I get overconfident on Saturdays? Does the model underestimate good weeks?
Not enough data yet to draw conclusions. But the infrastructure is accumulating data whether I think about it or not.
Here's the part most people skip. You can have the smartest model in the world, but if the goal only exists when you open an app, it loses to Instagram. The system needs surface area—for both knowing what the goal is, and for logging that you did it.
So I went wide.
Ambient awareness. Every morning at 7am a Telegram message drops with the number. It sits in my inbox all day.
The Apple Watch shows it as a complication—every time I check the time, I see the goal. An iOS lock screen widget shows it too. And on my desk, an M5Stack Atom Matrix—a $15 ESP32 with a 5×5 LED grid—displays the goal as a Cistercian numeral. The bottom row shows my last 5 days as colored dots: green for hit, red for miss.
It doesn't vibrate. It doesn't ping. It just glows. A quiet little billboard from me, to me, about what matters today.
Zero-friction logging. Finish a workout on Apple Watch → completion triggers an Apple Shortcut → one tap → done. The data flows: Val Town endpoint → SQLite → Beeminder → Fatebook resolution. If I did fewer than the goal, I can log the actual count. The model learns from effort ratios, not just binary pass/fail.
Real stakes. Beeminder puts money on the line. Every goal hit posts a datapoint on a chart with a required slope—a "yellow brick road." Fall below and you pay.
Four surfaces for seeing the goal. One tap for logging it. Money for failing. That's the campaign.
Look, I know how this reads. A logistic regression model for burpees. Cistercian numerals on a microcontroller. Prediction markets against yourself. It's a lot.
But I keep coming back to the asymmetry. The corporations competing for my attention have compounding advantages: more data, more engineers, more psychological research, more dollars. They have made capturing my behavior their literal business model.
The only asymmetric response is to be more intentional about my own feedback loops. To fight back by making the one thing that matters as salient and frictionless and engaging as the things that don't.
One metric. Every surface it can reach.
Built on Val Town. Logistic regression from scratch—no libraries. Predictions on Fatebook. Commitment device: Beeminder. Display: M5Stack Atom Matrix (ESP32). Health data: Apple Watch → Shortcuts → Val Town.
This section is for the nerds. Everything above is the philosophy — this is the plumbing.
Val Town is a platform where you write TypeScript functions and they become instantly deployed HTTP endpoints, cron jobs, or importable modules. No infra. No deploy step. You write code, it's live. The entire system described in this post is ~6 vals (TypeScript projects) running on Val Town's free tier with SQLite for persistence and blob storage for state.
This matters because the system only works if it's easy to modify. I've rewritten the goal model, the pipeline logic, and the display integration dozens of times over months. Each change was "edit code, save, it's live." If this required a deploy pipeline I would have stopped iterating months ago.
No TensorFlow, no scikit-learn, no dependencies at all. The model is ~50 lines of TypeScript: gradient descent with L2 regularization, z-score normalization, sigmoid output. It trains fresh every morning on all history — currently ~120 days of data — in under 100ms.
The training loop is textbook:
// lr (learning rate), lambda (L2 strength), weights, bias, examples in scope
for (let iter = 0; iter < 500; iter++) {
  for (const ex of examples) {
    const z = dot(weights, ex.features) + bias;
    const prediction = 1 / (1 + Math.exp(-z)); // sigmoid
    const error = prediction - ex.actual;
    for (let j = 0; j < weights.length; j++) {
      weights[j] -= lr * (error * ex.features[j] + lambda * weights[j]); // gradient + L2
    }
    bias -= lr * error; // bias is not regularized
  }
}
L2 regularization (λ=0.02) is important because with 30+ features and ~120 training examples, overfitting is a real risk. Without it the model memorizes noise — "you always fail on Tuesdays when your streak is exactly 3" — instead of learning genuine patterns.
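One detail the loop glosses over is the z-score normalization mentioned earlier: goals run up to 50 while effort ratios sit near 1, and a single λ can't penalize raw features on such different scales fairly. A minimal sketch of the normalization step, assuming `rows[i]` is one day's raw feature vector:

```typescript
// Z-score normalization: scale every feature column to mean 0, stddev 1,
// so one L2 penalty treats a 0-50 goal and a 0-1 effort ratio the same.
// Sketch only; shapes are assumed, not the actual val's code.
function zscore(rows: number[][]): { norm: number[][]; mean: number[]; std: number[] } {
  const n = rows.length;
  const d = rows[0].length;
  const mean = Array(d).fill(0);
  const std = Array(d).fill(0);
  for (const r of rows) r.forEach((x, j) => (mean[j] += x / n));
  for (const r of rows) r.forEach((x, j) => (std[j] += (x - mean[j]) ** 2 / n));
  for (let j = 0; j < d; j++) std[j] = Math.sqrt(std[j]) || 1; // guard constant columns
  const norm = rows.map((r) => r.map((x, j) => (x - mean[j]) / std[j]));
  return { norm, mean, std };
}
```

The returned `mean` and `std` matter too: at prediction time the same scaling has to be reapplied to candidate feature vectors, so training and inference see identical scales.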
I extracted this into a remixable val: dcm31/ev-goal-optimizer. Import recommend() with your own history and get back the optimal goal. Zero dependencies.
The model's feature vector isn't just "goal" and "day of week." It includes:
Goal context: the candidate goal, yesterday's goal, the delta between them, 7-day moving average of goals. The model learns that big jumps from yesterday are riskier than gradual increases.
Performance history: previous day's achieved (0/1), signed streak (positive = consecutive hits, negative = consecutive misses), 7-day and 14-day rolling success rates. Momentum is real — a 5-day hit streak predicts tomorrow differently than a fresh start.
Effort features: yesterday's effort ratio (actual/goal, capped at 1.5), 7-day average effort. Crucially, this means the model learns from partial performance. Logging 18 burpees against a goal of 22 is different from logging 0 — even though both are "misses."
Gap features: how far above or below the goal you were, as a ratio. 7-day average gap. This captures whether you're consistently close-but-missing vs not-even-trying.
Day of week: both as one-hot encoding (7 binary features) and as cyclic sin/cos encoding. The one-hot lets the model learn "Saturdays are different." The sin/cos encoding captures the continuous cycle so that Sunday and Monday are "close" rather than numerically distant.
Prediction features (dynamic): when I submit my own prediction, it gets stored in a predictions table with a round number. The model then includes charlie_pred_1, model_pred_1, goal_at_1 (and round 2, 3... if I negotiate multiple times) as features. This is the feedback loop where my subjective confidence literally changes the model's output.
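The sin/cos trick is small enough to show in full. A sketch of the cyclic encoding, assuming days are numbered 0–6:

```typescript
// Cyclic day-of-week encoding: project day index 0-6 onto the unit circle,
// so Saturday (6) and Sunday (0) land close together numerically, where a
// plain integer would put them 6 apart. Illustrative helper, not the
// actual feature builder.
function cyclicDow(dow: number): [number, number] {
  const angle = (2 * Math.PI * dow) / 7;
  return [Math.sin(angle), Math.cos(angle)];
}
```

The one-hot features carry the discrete "Saturdays are different" signal; this pair carries adjacency, so the model can also learn smooth weekly drift.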
The key insight: you can't just pick the goal where P(success) is highest — that's always goal=1. And you can't just pick the highest goal — that's always goal=50 with P≈0. Instead, sweep all candidates and compute P(success) × goal for each:
let best = { goal: 1, ev: -Infinity };
for (let goal = 1; goal <= 50; goal++) {
  const features = buildFeatures(history, goal, today);
  const prob = sigmoid(dot(weights, features) + bias);
  const ev = prob * goal; // expected burpees completed at this goal
  if (ev > best.ev) best = { goal, ev };
}
The EV curve typically has a single peak. Below the peak, you're leaving burpees on the table (easy goal, low value). Above the peak, you're being unrealistic (high value but probability collapses). The peak is where the model thinks you'll get the most done.
When I submit a prediction via GET /api/predict/75, here's what happens:
1. Lock today's goal if not already locked (this creates the model's Fatebook prediction).
2. Store my prediction in the `predictions` table: `(date, round, goal_shown, model_prob, charlie_prob)`.
3. Delete my previous Fatebook prediction for today (if I'm re-predicting).
4. Create a new Fatebook prediction with my probability, tagged `charlie-prediction`.
5. Re-run the model with my prediction now included as a feature.
6. If the model's new best goal differs from the locked goal → delete both Fatebook predictions, create new ones for the new goal, and update the lock.
The re-run in step 5 is the magic. My prediction becomes training signal for the model. If historically, days where I predicted 80% turned out well, the model learns to trust my confidence. If I'm systematically overconfident, it learns to discount me.
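The effect is easy to demonstrate with a toy stand-in for the model, where the stated probability enters as one extra feature with a positive weight. Every coefficient below is invented for illustration and bears no relation to the trained weights:

```typescript
// Toy negotiation: higher self-reported confidence raises P(success) at
// every candidate goal, which shifts the EV-optimal goal upward.
// All coefficients are made up for illustration.
function pickGoal(charlieProb: number): number {
  let best = { goal: 1, ev: -Infinity };
  for (let goal = 1; goal <= 50; goal++) {
    // harder goals lower the odds; confidence raises them
    const z = -0.25 * (goal - 20) + 2.0 * (charlieProb - 0.5);
    const p = 1 / (1 + Math.exp(-z));
    if (p * goal > best.ev) best = { goal, ev: p * goal };
  }
  return best.goal;
}
```

Even in this hand-wired version, predicting 90% yields a higher goal than predicting 30%; the real model additionally learns how much that confidence should be trusted.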
Val Town runs in UTC. I'm in São Paulo (UTC-3). The daily cutoff is 7am São Paulo time (= 2am PST, = 10am UTC). The ymd() function converts "now" to a date by offsetting to UTC-3 and rolling back a day if it's before 4am local:
function ymd(d: Date): string {
  const offset = -3 * 60; // São Paulo is UTC-3, in minutes
  const local = new Date(d.getTime() + offset * 60 * 1000);
  if (local.getUTCHours() < 4) {
    // before 4am local: the day hasn't rolled over yet
    local.setUTCDate(local.getUTCDate() - 1);
  }
  return local.toISOString().slice(0, 10); // YYYY-MM-DD
}
The yesno pipeline val used a different cutoff function (7am via toLocaleString). For weeks, doing burpees at 11pm São Paulo time would log to the wrong date in one system but the right date in the other. Streak dots would show red when they should be green. The fix was making the burpee-yoga-routine pass its own state.date to the burpees API instead of letting the API compute "today" independently.
The M5Stack Atom Matrix is a $15 ESP32 dev board with a 5×5 RGB LED grid. An Arduino sketch polls a Val Town endpoint every 30 seconds, gets a JSON payload of 25 hex color strings, and sets the LEDs accordingly.
The display uses Cistercian numerals — a medieval number system where a single glyph on a vertical staff encodes 1–9999 using four quadrants. The center column is always lit (the staff). Units go top-right, tens top-left, hundreds bottom-right, thousands bottom-left. So "32" lights specific cells in the tens quadrant (3) and units quadrant (2). It's surprisingly readable once you learn it.
The bottom row is overridden with streak dots: 5 LEDs showing the last 5 days, green for hit, red for miss. This means the Cistercian numeral for the goal and the recent history are both visible in a single glance at a 5×5 grid.
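The payload format is simple enough to sketch. Assuming a row-major array of 25 hex color strings (the actual numeral rendering lives in dcm31/atomMatrix, and these color values are illustrative), overriding the bottom row with streak dots looks roughly like:

```typescript
// A 5x5 grid as 25 hex color strings, row-major; indices 20-24 are the
// bottom row. Sketch only: real payload shape may differ.
type Grid = string[];

function withStreakRow(grid: Grid, last5: boolean[]): Grid {
  const out = grid.slice(); // don't mutate the rendered numeral
  last5.forEach((hit, i) => {
    out[20 + i] = hit ? "00FF00" : "FF0000"; // green = hit, red = miss
  });
  return out;
}
```

The device side stays dumb on purpose: it just maps 25 strings to 25 LEDs, so all logic changes happen in the val, not in reflashed firmware.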
The yesno val implements a generic habit pipeline. Each habit has two phases: PREDICT (optional) and CONFIRM. The pipeline steps through them sequentially, with auto-advance for habits that skip prediction.
For the burpee habit, the predict step auto-skips (the burpees val manages its own Fatebook predictions). The confirm step checks if burpees were already logged today — if so, it auto-completes. If not, pressing YES on the device (or hitting the Apple Shortcut endpoint) fires onConfirmYes, which calls the burpees API's /api/done.
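A hedged sketch of that two-phase step logic; the type and field names here are my guesses for illustration, not the yesno val's actual schema:

```typescript
type Phase = "PREDICT" | "CONFIRM";

interface Habit {
  name: string;
  skipPredict?: boolean; // e.g. burpees: predictions are managed elsewhere
}

// Advance one habit through its phases; null means "not started today".
function nextPhase(h: Habit, current: Phase | null): Phase | "DONE" {
  if (current === null) return h.skipPredict ? "CONFIRM" : "PREDICT";
  if (current === "PREDICT") return "CONFIRM";
  return "DONE"; // CONFIRM was the last phase
}
```

Keeping the pipeline generic like this is what lets one state machine serve both the burpee habit (auto-skipped predict) and habits that still use the built-in prediction step.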
A cron job runs after the daily boundary to auto-resolve any Fatebook predictions that were never confirmed — those resolve as NO.
The logging path is: Apple Watch workout completion → runs an Apple Shortcut → Shortcut makes a POST to the burpee-yoga-routine val's /api/advance endpoint → the routine state machine advances → when it hits the burpee confirm step, it calls /api/done on the burpees val → SQLite updated, Beeminder logged, Fatebook resolved.
A separate nightly Shortcut reads VO₂ max from Apple Health and POSTs it to the yesno val's /ingest endpoint, where it's stored in blob storage as a time series.
The whole chain — from finishing a workout on my wrist to updating a prediction market and a commitment contract and a physical LED display — takes about 2 seconds.
All the vals referenced in this post:
- dcm31/burpees — the core tracker: logistic regression, goal locking, Fatebook integration, Beeminder logging, web UI
- dcm31/yesno — habit pipeline state machine, Apple Shortcuts endpoints, VO₂ max ingestion
- dcm31/burpee-yoga-routine — Atom Matrix display orchestration, phase cycling (yoga → burpee → done)
- dcm31/atomMatrix — LED state management, Cistercian numeral rendering
- dcm31/ev-goal-optimizer — generic remixable version of the logistic regression + EV sweep (no burpee-specific code)