There's a $15 microcontroller on my desk that shows me a single number every morning. It's how many burpees I should do today. The number was chosen by a logistic regression model that trained on my own history at 7am, swept goals from 1 to 50, and picked the one that maximizes expected value. If I disagree with it, I can push back—and the goal actually changes.
If this seems like a lot of infrastructure for doing some jumping jacks—yeah, it is. But I actually don't think it's enough.
Consider the other side. Entire product teams, backed by billions in venture capital, spend their careers figuring out how to get me to tap one more thing. The notification timing. The streak I'd lose. The variable-ratio reinforcement schedule disguised as a feed. They are extremely good at this, and they do not have my interests in mind.
I can't out-spend them. But I can at least run a counter-campaign—by making the one thing I actually care about as visible, as easy, and as hard to ignore as the things they want me to do. I'm not building a habit tracker. I'm running an ad campaign. The product is burpees. The audience is me.
I used to spread myself across a dozen habits. Which meant I was always behind on something, and nothing got the investment it needed to actually stick.
Now I have one keystone metric: VO₂ max. It's one of the strongest predictors of all-cause mortality—basically how long and how well you live. High-intensity interval training is the highest-leverage way to move it. So: burpees.
But there's a tension. Burpees are yang—explosive, taxing, risky if you go in cold. I need yin to balance. Specifically I need sun salutations to warm up and prevent injury. And historically, the yin gets in the way of the yang. If I have to do 20 minutes of yoga before I can start, that's friction. Friction kills habits.
So I made the system absorb its own prerequisite: five sun salutations count toward the burpee goal. The yin is subsumed by the yang. Yoga is just slower burpees. One number.
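The bookkeeping is small enough to sketch. A minimal Python version, with two loud assumptions: the 1:1 burpee-equivalence rate and the five-salutation cap are my guesses at the conversion; the post only says that five sun salutations count toward the goal.

```python
def daily_count(burpees: int, sun_salutations: int, rate: float = 1.0) -> float:
    """Collapse yin and yang into the one number the goal tracks.

    Assumes each sun salutation counts as one burpee-equivalent,
    capped at the five that make up the warmup.
    """
    return burpees + rate * min(sun_salutations, 5)

print(daily_count(27, 5))  # 32.0: warmup plus burpees, one number
```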
Most goal systems are brittle. Miss a day, the streak breaks, you feel bad, you stop. I wanted something with structural integrity. After months of iteration, I think a goal worth respecting needs five properties:
Most systems silently absorb failure. Nothing happens. The number stays the same. Over time this teaches you that missing is fine—which, in one sense, it is. But it also means the system has no opinion. It doesn't care.
Beeminder does the opposite. Every datapoint you log lands on a chart with a required slope—a "yellow brick road." Fall below the road and you lose real money.
The important thing isn't punishment. It's that there's always a next day, and the road is always there, and the question is always just: are you on it? The system has an opinion about every single day, including the ones you missed.
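That daily question has a simple geometry, which a toy check makes concrete (the rate and totals below are illustrative, not my actual Beeminder settings):

```python
def on_road(cumulative: float, days_elapsed: int, rate_per_day: float) -> bool:
    # The road is a line with a required slope.
    # The system's only opinion: are you at or above it today?
    return cumulative >= days_elapsed * rate_per_day

print(on_road(65, 3, 20))  # on the road: 65 >= 60
print(on_road(55, 3, 20))  # below it: catch up or pay
```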
On my side, when I miss, the logistic regression model recalibrates. The goal drops. The increment adjusts. No guilt spiral—just a new number that accounts for reality. The system is never surprised by failure.
An arbitrary number gets ignored. A number you can always negotiate away isn't a goal. The goal has to feel like it knows something.
Every morning, my model sweeps goals 1–50, computes P(success) × goal for each, and picks the peak expected value. It trains fresh on all history with ~30 features: streak direction, rolling success rates, day-of-week cyclicality, effort ratios, how close I got on days I missed.
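The sweep itself fits in a few lines. This sketch assumes a trained model exposing `p_success(goal)`; a toy sigmoid stands in for the real ~30-feature logistic regression, so the specific numbers it produces are illustrative only.

```python
import math

def p_success(goal: int) -> float:
    # Toy stand-in for the trained model: success gets
    # less likely as the goal grows.
    return 1.0 / (1.0 + math.exp(0.15 * (goal - 25)))

def pick_goal(lo: int = 1, hi: int = 50) -> int:
    # Sweep every candidate goal and maximize
    # expected value: P(success | goal) * goal.
    return max(range(lo, hi + 1), key=lambda g: p_success(g) * g)

goal = pick_goal()
print(goal, round(p_success(goal), 2))
```

The peak is the interesting part: a bigger goal is worth more if you hit it, but the probability of hitting it falls, so the product has an interior maximum.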
The model is sometimes annoyingly right about bad days. When it gives me a lower number than I expected, I've learned to pay attention. Try it—hit Done or Skip below and watch how the goal responds over time:
https://dcm31--22eabcfe1a4311f1953c42dde27851f2.web.val.run
This is the newest piece and the one I'm most excited about.
When the model proposes a goal, I submit my own probability estimate. "I think there's a 75% chance I hit this today." That prediction gets stored as a feature in the model. The model re-trains with it. And the goal can actually shift. If I'm more confident than the model—slept well, have time, feeling good—the goal nudges up. If I'm doubtful, it pulls back.
This makes the goal a negotiation. My gut has skin in the game.
https://dcm31--9cc8f2ac1b2911f18fb042dde27851f2.web.val.run
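The direction of the nudge can be sketched with a toy model. In the real system my estimate is stored as a training feature and the model re-fits; here, as a stand-in, my estimate shifts the difficulty curve before the same expected-value sweep re-runs. The curve, the shift scale, and every number below are illustrative assumptions.

```python
import math

def negotiate(charlie_p: float, shift_scale: float = 20.0) -> int:
    # Confidence above 50% shifts the toy difficulty curve right
    # (more reps feel feasible today); doubt shifts it left.
    center = 25 + shift_scale * (charlie_p - 0.5)

    def p(goal: int) -> float:
        return 1.0 / (1.0 + math.exp(0.15 * (goal - center)))

    # Re-run the expected-value sweep with the shifted curve.
    return max(range(1, 51), key=lambda g: p(g) * g)

print(negotiate(0.9))  # confident morning: goal nudges up
print(negotiate(0.4))  # doubtful morning: goal pulls back
```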
Each round of negotiation creates a new row in a predictions table: (date, round, goal_shown, model_prob, charlie_prob). Those columns become features for future training. The model literally learns from how I felt about its suggestions—and whether my feelings were accurate.
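A hypothetical reconstruction of that table in SQLite follows: the column names come from the post, but the types, the primary key, and the sample row are my guesses.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE predictions (
        date         TEXT NOT NULL,     -- ISO day, e.g. '2026-03-08'
        round        INTEGER NOT NULL,  -- negotiation round within the day
        goal_shown   INTEGER NOT NULL,
        model_prob   REAL NOT NULL,
        charlie_prob REAL NOT NULL,
        PRIMARY KEY (date, round)
    )
""")
con.execute("INSERT INTO predictions VALUES ('2026-03-08', 1, 32, 0.68, 0.75)")
row = con.execute("SELECT goal_shown, charlie_prob FROM predictions").fetchone()
print(row)  # (32, 0.75)
```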
Each prediction—mine and the model's—creates its own question on Fatebook: "If my goal is 32, will I complete 32 burpees on 2026-03-08?" They auto-resolve YES or NO the next morning from the database.
Over time this builds two calibration tracks. The model's track: are its probabilities realistic? My track: do I actually know myself? Am I overconfident on Saturdays? Do I underestimate the momentum of a good week?
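One way to score a calibration track, sketched in Python: bucket the stated probabilities and compare each bucket's average prediction to its observed hit rate. The (probability, hit) pairs below are made up for illustration.

```python
from collections import defaultdict

def calibration(preds):
    """preds: list of (stated_prob, hit: bool) pairs.

    Returns {bucket: (observed_hit_rate, n)} per 10% bucket.
    A calibrated forecaster's hit rate tracks the bucket value.
    """
    buckets = defaultdict(list)
    for prob, hit in preds:
        buckets[round(prob, 1)].append(hit)
    return {b: (sum(hits) / len(hits), len(hits))
            for b, hits in sorted(buckets.items())}

history = [(0.7, True), (0.7, True), (0.7, False), (0.9, True), (0.9, True)]
for bucket, (rate, n) in calibration(history).items():
    print(f"said ~{bucket:.0%}: hit {rate:.0%} of {n}")
```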
Not enough data yet to draw real conclusions. But the infrastructure is in place—and the data is accumulating whether I think about it or not.
This is the underrated property. A goal you only see when you open an app is a goal that loses to Instagram. The system needs surface area—both for knowing what the goal is and for logging that you did it.
Here's my current inventory:
Knowing the goal:
Every morning at 7am, a Telegram message drops into my inbox with the number. It sits there all day. The Apple Watch shows it as a complication I see every time I check the time. An iOS lock screen widget shows it. And the Atom Matrix on my desk glows it in red LEDs—a Cistercian numeral on a 5×5 grid—with the bottom row showing a 5-dot streak history in green and red.
Logging that you did it:
Finish a workout on Apple Watch → workout completion fires an Apple Shortcut → one tap marks it done. Zero extra steps. The data flows: Val Town endpoint → SQLite → Beeminder → Fatebook resolution. If I did fewer than the goal, I can log the actual count—the model learns from effort ratios, not just binary pass/fail.
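The Beeminder hop of that pipeline can be sketched as a payload builder. The endpoint shape follows Beeminder's public datapoints API; the username and goal slug are hypothetical, and the effort-ratio comment is one way to carry the "actual count, not just pass/fail" signal along.

```python
def beeminder_payload(user: str, goal_slug: str, done: int, goal: int):
    """Build the URL and form data for one Beeminder datapoint."""
    url = (f"https://www.beeminder.com/api/v1/users/{user}"
           f"/goals/{goal_slug}/datapoints.json")
    return url, {
        "value": done,  # log the actual count, even when done < goal
        "comment": f"effort ratio {done / goal:.2f} vs goal {goal}",
    }

url, data = beeminder_payload("charlie", "burpees", 24, 32)
print(data["comment"])  # effort ratio 0.75 vs goal 32
```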
Probably not. Think about what I'm up against.
The corporations competing for my attention have compounding advantages: more data, more engineers, more psychological research, more feedback loops, and fundamentally more dollars. They have made it their business—literally their business model—to influence how I spend my time.
The only asymmetric response is to be more intentional about my own feedback loops. To make the one thing that actually matters for my health as salient and frictionless as whatever they want me to click next.
One metric. One keystone habit. Every surface it can reach.
The whole system runs on Val Town. The logistic regression is from scratch—no libraries. Predictions track on Fatebook. The commitment device is Beeminder. The display is an M5Stack Atom Matrix (ESP32). Health data flows from Apple Watch via Apple Shortcuts.