The Wrong Features
Platypus & Fox | No. 1
There is a particular kind of failure in predictive modeling that is easy to miss and hard to forgive. It doesn’t announce itself. The accuracy metrics look clean. The model validates. The math checks out. And yet when you actually deploy it, the thing fails at the exact moment you need it most.
It’s called incorporation bias. Once you learn to see it, you can’t stop finding it.
The Sepsis Problem
In medicine, we are obsessed with prediction. If we can identify the sick patient before they crash, we can intervene earlier, do less harm, save more lives. The rise of electronic medical records and AI has given us mountains of data and a genuine opportunity to find those early signals. It has also given us new and creative ways to fool ourselves.
Schertz1 and colleagues identified a striking problem with the EPIC Sepsis Model, one of the most widely deployed clinical prediction tools in American hospitals. The model’s accuracy improved substantially when it incorporated two variables: whether a physician had ordered blood cultures, and whether they had ordered antibiotics.
On the surface this seems reasonable. Ordering blood cultures and antibiotics are things clinicians do when they’re worried about sepsis. Surely that clinical behavior is a useful signal?
The problem is that it’s the wrong kind of signal. Physicians order blood cultures and antibiotics when they have already recognized sepsis, or are already treating for it. The model incorporated features that are downstream of the diagnosis. It was trained to identify patients who were already being treated, not patients who needed to be found.
The sepsis score became most accurate precisely when it was least useful. The model looked confident. The model was broken.
That’s incorporation bias. It describes a predictive model that includes features caused by the outcome, correlated with it through a path that bypasses the actual mechanism you care about. The accuracy is real. The utility is an illusion. You’ve built a self-fulfilling prophecy into the math, and it takes a careful eye to catch it.
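The trap is easy to demonstrate in miniature. Here is a toy simulation, with every number and variable invented for illustration rather than drawn from any real model: a “sepsis” outcome driven by physiology, plus an antibiotics flag that exists only because a clinician already recognized the outcome. The model that reads the flag scores better on paper while telling the clinician nothing they didn’t already know.

```python
import random

random.seed(42)

def simulate(n=10_000):
    """Toy cohort: sepsis is driven by physiology; antibiotics are
    usually ordered only after a clinician has recognized sepsis."""
    rows = []
    for _ in range(n):
        physio = random.gauss(0, 1)              # upstream signal (lactate-like)
        sepsis = physio + random.gauss(0, 1) > 1.0
        # downstream feature: caused by the outcome, not by the patient's state
        abx = random.random() < (0.8 if sepsis else 0.05)
        rows.append((physio, abx, sepsis))
    return rows

def accuracy(rows, use_abx):
    """Accuracy of two crude classifiers: read the treatment decision,
    or read the physiology itself."""
    correct = 0
    for physio, abx, sepsis in rows:
        pred = abx if use_abx else (physio > 1.0)
        correct += (pred == sepsis)
    return correct / len(rows)

rows = simulate()
print(f"physiology only:       {accuracy(rows, use_abx=False):.2f}")
print(f"with antibiotics flag: {accuracy(rows, use_abx=True):.2f}")
```

The antibiotics-flag model wins on accuracy every time, for exactly the wrong reason: its best feature is a recording of the diagnosis it claims to predict.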
The Same Bias, Closer to Home
I’ve been watching for this pattern in clinical data for years. It took me considerably longer to notice it in myself.
Some mornings everything arrives at once. I was driving two of my kids to school, switching back and forth between Critical Role and the Daily Stoic, which is a chaotic way to absorb ideas but an honest one. The Stoic episode was on “enough”: the idea that failing to define sufficiency is itself a kind of excess. Not just in possessions or ambition, but in commitments. The promises we make to others and, harder to track, the ones we make to ourselves.
Somewhere under both of them was something my wife had said the week before that I hadn’t quite landed on yet. “You need to reframe your sense of time. You say yes to things without thinking about what you’ve already committed to. It’s like you’re committing because of something else entirely.”
Something else entirely. Not time. Not capacity. Not an honest look at what the next three months actually hold. Something else was driving the yes, and that something was being folded into the decision in a way that made the outcome look more certain, more beneficial, more achievable than it really was.
The skeptical reader might ask whether this is just confirmation bias in disguise. It isn’t, though the two are easy to conflate. Confirmation bias is about how we weigh evidence: we seek information that supports what we already believe. Incorporation bias is structural: it’s about which features belong in the model at all. The problem isn’t that I’m ignoring contrary evidence about my capacity. It’s that I’m using a feature, how good the finished thing will feel, that comes from completing the commitment, not from anything present in the moment of deciding. That future state isn’t a signal. It’s an outcome dressed up as one, and folding it into the decision is exactly what makes the sepsis score look accurate when it shouldn’t.
The Multiclass Problem
Critical Role is, among other things, a very long master class in decision-making under constraints. That’s probably why it hit differently that morning.
In Dungeons and Dragons, players can multiclass, picking up skills from a second or third character class alongside their primary one. A Fighter who dips into Bard gains charisma skills, some spellcasting, and inspiration dice. At level 3, it feels like pure upside. Look at all those new abilities.
What the excitement obscures is what stops compounding in the background. A pure Fighter gets Extra Attack at level 5. A Fighter who multiclassed at level 3 gets there considerably later, if ever. Every level spent in a new class is a level not deepening the first. Multiclassing isn’t inherently wrong. Sometimes the breadth is worth it. But the cost is real and it tends to arrive quietly, long after the decision felt obvious.
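The cost is easy to make concrete. A small sketch, assuming the fifth-edition rule that Extra Attack arrives at Fighter level 5, counts the character level at which each build actually gets there:

```python
def level_of_extra_attack(build):
    """Return the character level at which the build reaches 5 Fighter
    levels (when Extra Attack arrives in 5e), or None if it never does."""
    fighter_levels = 0
    for character_level, cls in enumerate(build, start=1):
        if cls == "Fighter":
            fighter_levels += 1
        if fighter_levels == 5:
            return character_level
    return None

pure = ["Fighter"] * 20                              # single-classed
dip = ["Fighter"] * 3 + ["Bard"] * 2 + ["Fighter"] * 15  # two-level Bard dip

print(level_of_extra_attack(pure))  # 5
print(level_of_extra_attack(dip))   # 7
```

Two levels of Bard delay the milestone by two levels, and every further dip pushes it again. The arithmetic is trivial; the point is that it never shows up in the moment the dip is chosen.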
The sepsis model failed because it included features it shouldn’t have. The multiclassing decision fails because it ignores the feature it should. Both errors produce the same result: a model that appears confident but performs poorly. What gets incorporated is the excitement of new abilities. What gets left out is the compounding cost of depth foregone — the thing that actually predicts whether the choice holds over time.
I have been multiclassing my life for years. Physician. Epidemiologist. Vice Chair. Scout leader. Rower. Substack writer. App builder. Podcast host. Father. Spouse. Handyman. Son. Some of this is just being human. Some of it was a choice, and each of those choices felt like an upside at the moment I made it. When asked to take something on, I focused on the reward: the finished thing, the problem solved, the version of myself on the other side. I was not making my decision based on what I actually had available at that moment.
My wife saw this more clearly than I did. She didn’t need the language of predictive modeling. You’re committing because of something else. The something else was the incorporated outcome, the future state pulled forward into the present decision, dressed up as a feature.
My Spirit Animals
When I start rounds with a new team, I ask them what their spirit animal is. Mine has shifted over time, but two have stuck: the platypus and the fox.
The platypus is one of nature’s great anomalies. It is built from parts that seem like they shouldn’t work together and yet somehow do. It nurses young like a mammal but lays eggs. It has a beaver’s tail, an otter’s feet, and a bill that belongs to no category anyone was prepared for when it first arrived in European scientific literature. Naturalists famously thought it was a hoax. But the strangest part isn’t the anatomy. The platypus hunts in complete darkness, eyes closed, locating prey through electroreception, sensing electrical fields invisible to everything else in the water. It doesn’t see the signal. It feels it, through a mechanism tuned entirely to what is actually present rather than what it expects to find. It doesn’t project. It doesn’t incorporate the hoped-for meal into its search. It reads what’s there.
The fox works differently but arrives at something similar. It doesn’t overpower. It reads conditions as they are, not as it wishes them to be. The fox is patient and precise, moving through what’s genuinely available rather than charging toward what it imagines. There’s a reason Isaiah Berlin2 borrowed the fox as a symbol for a particular kind of intelligence: the kind that holds many things at once without forcing them into a single grand theory. The fox knows how to work with complexity without being undone by it.
Together, they represent something I keep coming back to. Find the signal that is actually present. Move with what is genuinely there. Don’t let the anticipated reward corrupt the read.
The Better Model
That’s the work this publication is built around. How do we find signal in noise? How does data become knowledge, and knowledge become something we can actually live by? In medicine. In our bodies. In the ordinary decisions that quietly shape a life. The sepsis example and the morning in the car are not as far apart as they seem. That’s the point.
In sepsis prediction, fixing incorporation bias means building on physiology3, the patient’s current state, not on treatment decisions already downstream of the diagnosis. You have to be ruthless about which direction causality runs.
In your own decision-making, the fix is asking not how good will this feel when it’s done but what do I actually have right now. Time, energy, depth of attention, commitments already drawing on those reserves: these are the features that predict whether a yes will hold. Identity, “I am someone who helps, who builds things, who says yes,” is not a feature. The imagined reward isn’t either. These are incorporation bias, confident and invisible, quietly degrading the model from the inside.
The Stoics called the corrective “enough.” Not a ceiling on ambition but a discipline of honest accounting. Know what you actually have. Build your model from that.
My wife knew this already. It took a sepsis score, a Dungeon Master, and a dead philosopher to help me catch up.
Platypus & Fox is about finding signal in noise: in medicine, in data, in the examined life. If this landed, share it with someone else who’s running too many character classes.
Schertz et al. Sepsis Prediction Model for Determining Sepsis vs SIRS, qSOFA, and SOFA. JAMA Netw Open. 2023;6(8):e2329729. doi:10.1001/jamanetworkopen.2023.29729
Isaiah Berlin. The Hedgehog and the Fox: An Essay on Tolstoy’s View of History, Second Edition. Princeton University Press, 2013.
T. Moss, …, K. Enfield, et al. Signatures of Subacute Potentially Catastrophic Illness in the ICU: Model Development and Validation. Crit Care Med. 2016;44(9):1639-48.


