The Base Rate Fallacy: Why We Ignore the Most Important Number
A doctor tells you that you've tested positive for a rare disease. The test is 95% accurate, she explains. Should you panic? Most people would. But before you do, consider this: if the disease affects only 1 in 1,000 people, your actual chance of having it—even with a positive test—is less than 2%. This counterintuitive result reveals one of the most consequential errors in human reasoning: the base rate fallacy.
What Base Rates Are and Why We Ignore Them
The base rate is simply how often something occurs in a population. It's the fundamental probability before you consider any additional information. In the medical example above, the base rate is 1 in 1,000—that's how many people actually have the disease.
When we make judgments under uncertainty, we should start with the base rate and then adjust based on new evidence. Instead, we do the opposite: we fixate on the vivid, specific information ("95% accurate test!") and ignore the boring statistical backdrop. Psychologists Amos Tversky and Daniel Kahneman identified this pattern in the 1970s, demonstrating that even trained professionals systematically neglect base rates when making probability judgments.
The mathematics is straightforward but our intuition rebels against it. With a 95% accurate test and a 1-in-1,000 disease, here's what happens to 10,000 people: 10 actually have the disease (and 9 or 10 of them test positive), while 9,990 are healthy (of whom about 500 falsely test positive). A positive test result therefore puts you in a pool of roughly 510 people, of whom only 9 or 10 actually have the disease. Your odds: roughly 1.9%.
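The arithmetic above can be reproduced in a few lines. A minimal sketch in Python, taking "95% accurate" to mean both 95% sensitivity (true positive rate) and 95% specificity (true negative rate):

```python
# Natural-frequency walkthrough of the disease example:
# a 1-in-1,000 disease and a 95% accurate test.

population = 10_000
base_rate = 1 / 1_000        # disease prevalence
sensitivity = 0.95           # P(positive | disease)
specificity = 0.95           # P(negative | healthy)

sick = population * base_rate                   # 10 people
healthy = population - sick                     # 9,990 people

true_positives = sick * sensitivity             # 9.5
false_positives = healthy * (1 - specificity)   # 499.5

# Among everyone who tests positive, what fraction is actually sick?
posterior = true_positives / (true_positives + false_positives)
print(f"P(disease | positive) = {posterior:.1%}")  # about 1.9%
```

Working in counts of people rather than probabilities is exactly the "concrete numbers" strategy the article returns to below: the pool of ~510 positives, mostly false, is visible at a glance.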
The Prosecutor's Fallacy
In 1999, a British woman named Sally Clark was convicted of murdering her two infant sons. The prosecution's expert witness testified that the chance of two children in the same family dying of Sudden Infant Death Syndrome (SIDS) was 1 in 73 million. The jury found her guilty.
The reasoning was backwards. The relevant question wasn't "What's the probability of two SIDS deaths?" but rather "Given two infant deaths in one family, what's the probability they were murders versus natural causes?" To answer that, you need the base rates: how often do two children in the same family die of SIDS, and how often does a mother murder two of her children? SIDS, while rare for each individual child, is far more common than double infanticide. Sally Clark spent more than three years in prison before her conviction was overturned on appeal in 2003, after statisticians and medical experts challenged the evidence against her.
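The structure of the correct comparison can be sketched numerically. The rates below are hypothetical placeholders chosen only to illustrate the logic, not the actual figures debated in the Clark case:

```python
# Hypothetical rates, for illustration only -- not the real case figures.
p_double_sids = 1 / 130_000      # assumed: two SIDS deaths in one family
p_double_murder = 1 / 1_000_000  # assumed: a mother murdering two infants

# Given that two infants in one family have died, the question is which
# rare explanation is more likely -- so we compare the two hypotheses.
odds_ratio = p_double_sids / p_double_murder
print(f"Under these assumptions, double SIDS is ~{odds_ratio:.0f}x "
      f"more likely than double murder")
```

The point is not the specific numbers but the shape of the question: both explanations are rare, so only their ratio, not the smallness of either one, tells you which is more plausible.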
Key Takeaways
The base rate fallacy reveals how our minds privilege vivid, case-specific information over dull statistics. To reason more accurately under uncertainty:
- Always ask for the base rate first. Before evaluating specific evidence, establish how common the phenomenon is in the relevant population.
- Remember that rare events stay rare. Even strong evidence (like a 95% accurate test) can't make uncommon outcomes likely when the base rate is very low.
- Use concrete numbers, not percentages. Thinking about "10 out of 10,000" rather than "0.1%" makes the mathematics more intuitive and helps bypass our cognitive blind spot.
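The takeaways above can be packaged as a small helper that makes the base rate's influence explicit. A sketch, assuming the same 95% sensitivity and specificity as the running example:

```python
def posterior_given_positive(base_rate, sensitivity=0.95, specificity=0.95):
    """P(condition | positive test), by Bayes' rule."""
    true_pos = base_rate * sensitivity              # sick and detected
    false_pos = (1 - base_rate) * (1 - specificity) # healthy but flagged
    return true_pos / (true_pos + false_pos)

# The identical test, applied at different base rates:
for rate in (1 / 1000, 1 / 100, 1 / 10):
    print(f"base rate {rate:.3f} -> "
          f"posterior {posterior_given_positive(rate):.1%}")
```

Sweeping the base rate shows the second takeaway in action: the same "95% accurate" evidence yields a posterior of roughly 2% at a 1-in-1,000 base rate, but rises sharply as the condition becomes more common.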
The Daily Habit
The next time someone presents you with alarming individual evidence—a positive test, a suspicious pattern, a seemingly meaningful coincidence—pause and ask: "How often does this occur in general?" That simple question transforms how you evaluate risk, make medical decisions, and assess unusual events. The most important number in probability is often the one nobody mentions.
References
- Kahneman, D., & Tversky, A. (1973). "On the psychology of prediction." Psychological Review, 80(4), 237-251.
- Gigerenzer, G. (2002). Calculated Risks: How to Know When Numbers Deceive You. Simon & Schuster.
- Hill, R. (2004). "Multiple sudden infant deaths—coincidence or beyond coincidence?" Paediatric and Perinatal Epidemiology, 18(5), 320-326.
- Bar-Hillel, M. (1980). "The base-rate fallacy in probability judgments." Acta Psychologica, 44(3), 211-233.