
How Not to Be Wrong

by Jordan Ellenberg

How Not to Be Wrong by Jordan Ellenberg reveals how mathematics is a powerful tool for understanding the world and avoiding errors. By applying mathematical thinking, readers can make better decisions in daily life, understand research findings, and see through misleading statistics.

The Power of Mathematical Thinking

What does it mean to think mathematically in everyday life? In How Not to Be Wrong, Jordan Ellenberg argues that mathematics is not primarily about numbers—it’s about habits of mind. He shows you that math is a way to avoid intuitive traps, to see curves where others see straight lines, to notice missing data, and to tell correlation from causation. Math, for Ellenberg, is your everyday defense against being deceived—by others, by data, or by your own instincts.

From Linearity to Curvature

We often assume simple proportionality: if some is good, more must be better. The book opens by dislodging this linear fallacy. Ellenberg shows how Arthur Laffer’s napkin sketch—the now-famous Laffer curve—captured the nonlinear link between tax rates and revenue: both extremes (a 0% rate and a 100% rate) raise less than some intermediate rate. Likewise, Abraham Wald’s World War II analysis of bullet holes showed that armor should go where the holes are missing, not where they’re visible. Both insights expose the danger of one-dimensional thinking: no straight-line formula governs a complex system.

The Geometry of Uncertainty

Ellenberg then takes you through the geometry that underlies reasoning itself. Probability, limits, and correlation are not merely statistical tools—they’re ways of giving shape to doubt. From Zeno’s paradox to Cauchy’s rigor, you learn how the concept of limits transforms infinite processes into stable truths. This grounding lets you think clearly about randomness and variation—whether you’re interpreting an obesity trend, a lottery payoff, or the uncertainty in medical trials.

When Data Deceive

The middle chapters turn to bias and inference. Ellenberg introduces you to survivorship bias (Wald’s missing planes), selection bias (Berkson’s fallacy), and regression to the mean (Galton’s heights). Each is a way data disguise themselves. If you only see success stories or hospital patients, your sample lies. If you celebrate a star’s early triumph, expect regression. Seeing these biases together cultivates counterfactual thinking—the habit of asking, “What didn’t I see?” or “Where are the missing planes?”

The Tools of Inference

Next comes statistical inference. P-values are useful but often misunderstood; they answer the wrong question. Bayesian reasoning provides the right one: how to update your beliefs given evidence and background knowledge. Ellenberg blends classical and Bayesian logic to show how priors matter in evaluating medical tests, terrorist lists, or even algorithmic predictions. The book argues for a rational humility—probabilities reflect what you know and don’t know, not what’s absolutely true.

From Expected Value to Human Utility

Ellenberg’s stories—from the MIT Cash WinFall lottery to Bernoulli’s St. Petersburg paradox—use expected value to distinguish rational risk-taking from naive gambling. The math of average payoff isn't enough; you must factor in personal utility, risk tolerance, and ruin. The same reasoning informs policy: expected lives saved matters more than absolute certainty. Utility bridges the arithmetic of probability with the psychology of choice.

Geometry as Strategy

Finally, Ellenberg returns to geometry—Buffon’s needle, the Fano plane, and error-correcting codes—to show that deep mathematical forms govern luck, design, and communication. Hamming and Shannon’s theories of distance and information echo in lottery ticket systems and digital messages. The same structures that protect astronauts' signals can make gambling systems robust. Math, at its best, is always dual-use: abstraction that returns as power in the real world.

Core message

Ellenberg’s book ultimately teaches you that mathematics is thinking in slow motion—a practice of stepping back from appearances and seeing the structure beneath. Whether you’re reading polls, designing policies, or judging risks, math is not a set of formulas but a disciplined way of asking better questions.


Nonlinear Thinking and Hidden Curves

You often trust straight lines—assuming that doubling effort doubles reward. Ellenberg warns that reality curves. From tax rates to education funding, nonlinear relationships abound, and ignoring them fuels bad policy and bad reasoning.

The Laffer Curve and Policy Myopia

Arthur Laffer’s napkin sketch of tax revenue versus tax rate captured a crucial truth: both extremes hurt revenue. Raising or lowering taxes isn’t uniformly good or bad; the effect depends on where you start. When policy ignores curvature, outcomes surprise everyone. Ellenberg uses this political story to make the mathematical point vivid: beware simple slogans, and ask how cause and effect bend.
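The hump Laffer sketched can be made concrete with a toy model. The functional form below is an illustrative assumption (Laffer drew a shape, not an equation): revenue is the rate times economic activity, with activity assumed to shrink as the rate rises.

```python
# Toy revenue curve in the spirit of the Laffer sketch.
# The linear activity model is a hypothetical assumption for illustration.
def revenue(rate: float) -> float:
    """Revenue at a tax rate in [0, 1]: rate times remaining activity."""
    activity = 1.0 - rate  # assumed: activity falls as the rate climbs
    return rate * activity

rates = [i / 100 for i in range(101)]
best = max(rates, key=revenue)

print(revenue(0.0), revenue(1.0))  # both extremes raise nothing
print(best)                        # the peak sits strictly inside (0, 1)
```

The exact peak location depends entirely on the assumed activity model; the point that survives any reasonable model is that the curve bends, so “raise” or “lower” is not a fixed prescription.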

Abraham Wald’s Lesson

In wartime, data showed bullet holes on returning planes concentrated in the wings and fuselage. Officers wanted to armor those areas. Wald realized that planes hit elsewhere didn’t return. His counterintuitive question—“What aren’t we seeing?”—reversed the policy. That’s the habit Ellenberg urges: before following a linear intuition, consider curvature and missing cases.

Everyday Curves

Public health, education spending, and risk management all display diminishing returns or thresholds. Increasing school budgets doesn’t linearly raise test scores; taxing calories won’t linearly shrink obesity. Ellenberg doesn’t demand complex math—just curiosity about shape. Ask: Is the curve hump-shaped, saturating, or tipping?

Insight

Whenever results depend on how far you’ve already gone, “more” or “less” aren’t fixed prescriptions. The right move depends on where you sit on the curve.


Seeing the Hidden Data

Statistical wisdom often begins by noticing what’s missing. Ellenberg builds on Abraham Wald’s insight to explain survivorship and selection bias—how invisible data distort what you believe is true.

Survivorship Bias

When you study success stories—top funds, popular businesses—you forget the failures cut from view. Wald’s question, “Where are the missing planes?” generalizes: What vanished before you could observe it? Finance, hiring, and media all glorify survivors; including the dead funds, the fired employees, or unpublished studies changes the story dramatically.

Selection Bias and Berkson’s Fallacy

Sometimes your sample selection itself creates false correlations. Berkson’s hospital paradox—high blood pressure and diabetes appearing negatively correlated among admitted patients—arises because the hospital admits those with either condition. Conditioning on a common effect fabricates patterns. Similar sampling illusions shape dating, hiring, and product reviews: when selection depends on the very variables you’re studying, apparent relationships can be pure artifacts.
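Berkson’s effect is easy to conjure from scratch. In the sketch below (the 20% prevalence figures are invented for illustration), two conditions are statistically independent in the population, yet become strongly negatively correlated once you look only at people admitted for having at least one of them:

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Two independent conditions, each hitting 20% of the population (assumed rates).
people = [(random.random() < 0.2, random.random() < 0.2) for _ in range(100_000)]
# The hospital admits anyone with at least one condition.
admitted = [(a, b) for a, b in people if a or b]

pop = corr([a for a, _ in people], [b for _, b in people])
hosp = corr([a for a, _ in admitted], [b for _, b in admitted])
print(round(pop, 3), round(hosp, 3))  # near zero overall, sharply negative in hospital
```

Nothing causal links the conditions; the negative correlation is manufactured entirely by the admission rule.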

How to Guard Against Missingness

Ellenberg’s antidote is counterfactual curiosity. Ask for denominators, not just success counts. Question who gets recorded and who doesn’t. Use randomization or modeling to represent unseen cases. Missing data aren’t neutral—they systematically distort your picture of the truth.

Core principle

Before drawing conclusions, imagine the invisible—not just survivors but the unmeasured, the filtered, and the excluded.


Variation, Extremes, and the Law of Large Numbers

Extreme results seduce you—top schools, best players, small towns with outsized rates. Ellenberg shows how small samples exaggerate randomness and how big samples calm it. That’s the Law of Large Numbers in life.

Why Small Samples Mislead

In small populations, probabilities swing wildly. South Dakota might lead in brain-cancer rate one year, North Dakota the next. Coin-flip experiments illustrate how variation shrinks as trials grow. Extreme results in tiny data sets rarely reflect real superiority.
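The Dakotas point can be checked directly with a coin-flip simulation (town sizes and trial counts below are arbitrary choices for illustration): among many tiny “towns,” the most extreme observed rate strays far from the true 50%, while large towns barely budge.

```python
import random

random.seed(1)

def extreme_rate(population: int, towns: int = 1000) -> float:
    """Most extreme heads-rate seen across many same-sized towns of coin flips."""
    rates = []
    for _ in range(towns):
        heads = sum(random.random() < 0.5 for _ in range(population))
        rates.append(heads / population)
    return max(rates)

small = extreme_rate(10)     # a tiny town can look wildly atypical
large = extreme_rate(1000)   # a big one hugs the true rate of 0.5
print(small, large)
```

The underlying process is identical in both cases; only the sample size differs, and it alone produces the “extraordinary” small-town rates.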

Regression to the Mean

Francis Galton’s concept of regression formalized why extraordinary performances fade. The tallest parents have tall—but, on average, not equally tall—children. Horace Secrist misread this pattern as mediocrity overtaking business excellence; Harold Hotelling corrected him: regression is a statistical inevitability, not a causal force. Luck elevates outliers temporarily; performance then drifts back toward the average.
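A minimal skill-plus-luck model (the Gaussian components and the 5% cutoff are illustrative assumptions) reproduces Galton’s effect: the stars of round one remain above average in round two, but fall back noticeably, because part of their first score was luck that doesn’t repeat.

```python
import random

random.seed(2)

n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skill]  # performance = skill + luck
round2 = [s + random.gauss(0, 1) for s in skill]  # same skill, fresh luck

# Take the top 5% of round-1 performers and watch them regress.
cut = sorted(round1)[int(0.95 * n)]
stars = [i for i in range(n) if round1[i] >= cut]
avg1 = sum(round1[i] for i in stars) / len(stars)
avg2 = sum(round2[i] for i in stars) / len(stars)
print(round(avg1, 2), round(avg2, 2))  # round-2 average sits closer to zero
```

No one’s skill changed between rounds; the drop is pure selection on luck, which is exactly Hotelling’s rebuttal to Secrist.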

When to Trust Extremes

Ellenberg suggests using confidence intervals or hierarchical models to temper extremes. If you judge schools or funds, shrink estimates toward the mean. Randomization and replication prevent mistaking luck for causality. In sports, medicine, and business, dramatic early results often regress.

Guideline

Small samples amplify noise; large samples reveal pattern. Expect moderation—extreme results tend to be followed by more ordinary ones.


Inference: From P-Values to Bayes

Statistical inference is both powerful and fragile. Ellenberg explains how p-values, multiple testing, and Bayesian reasoning form the modern logic of scientific evidence—and how misunderstanding them fuels false conclusions.

What P-Values Really Mean

A p-value answers: if there were no real effect, how unusual would these data be? It is not the probability that the null hypothesis is true. Misreading this produces “significant” findings that don’t reproduce, especially when many tests are run. Ellenberg’s Baltimore stockbroker parable and the dead-salmon fMRI study illustrate how random chance can masquerade as discovery.
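The stockbroker parable is simple to simulate (the 100,000 guessers and ten-week window are illustrative numbers): when enough people guess at random, some compile records that would each look wildly significant in isolation.

```python
import random

random.seed(3)

brokers, weeks = 100_000, 10

# Each "broker" guesses the market's direction by coin flip each week,
# so any single one has only a 1-in-1024 chance of a perfect record.
perfect = 0
for _ in range(brokers):
    hits = sum(random.random() < 0.5 for _ in range(weeks))
    if hits == weeks:
        perfect += 1

print(perfect)  # many flawless-looking records arise by chance alone
```

Each perfect record, viewed alone, has a p-value of about 0.001; viewed against the whole pool of guessers, it is exactly what chance predicts. That is the multiple-testing trap.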

Bayesian Correction

Bayes’s theorem fixes the misinterpretation. It updates belief by combining prior probability with observed evidence. Applied to terrorist screenings or medical diagnostics, it reveals why even accurate tests produce mostly false alarms when the event is rare. Priors matter: when an event is rare, the base rate dominates.
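The rare-event effect falls straight out of Bayes’ theorem. The numbers below (a 1-in-10,000 condition, a 99%-sensitive test with a 1% false-positive rate) are illustrative, not from the book:

```python
def posterior(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity           # truly sick and flagged
    false_pos = (1 - prior) * false_positive  # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# A seemingly excellent screen for a rare condition (illustrative rates).
p = posterior(prior=1e-4, sensitivity=0.99, false_positive=0.01)
print(round(p, 4))  # about 0.0098: fewer than 1% of positives are real
```

The test is “99% accurate,” yet over 99% of its alarms are false, because healthy people vastly outnumber sick ones. The prior does the heavy lifting.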

Reforming Evidence

Ellenberg applauds movements like preregistration and large-scale replication. Science improves not by forbidding error but by quantifying uncertainty. Bayesian reasoning and multiple-comparison corrections free inquiry from arbitrary thresholds.

Key idea

Evidence isn’t binary—it updates belief. Trust inference tools only when you understand what question they’re answering.


Expected Value, Utility, and Rational Risk

If math helps you reason under uncertainty, it must also help you navigate risk. Ellenberg’s intertwining of lotteries, paradoxes, and utility theory shows how rational decision-making depends on expectations and values—not just outcomes.

The Cash WinFall Case

MIT students exploited the Massachusetts Cash WinFall lottery’s roll-down rule. On roll-down days, a ticket’s expected value turned positive—statistically profitable if you bought enough tickets. That strategy wasn’t luck; it was calculation. Ellenberg shows how computing expected return reveals such opportunities, and why scale mattered: buying tickets in volume is what tamed the variance and made the edge reliable.
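The mechanism can be sketched with invented prize tables (these are not the actual WinFall odds or payouts, just numbers shaped to show how a roll-down flips the sign of the expectation):

```python
def expected_value(outcomes) -> float:
    """Sum of probability * payoff over all (probability, payoff) outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical ticket with two prize tiers (illustrative numbers only).
ticket_price = 2.00
ordinary = [(1 / 1000, 50.0), (1 / 50, 5.0)]       # a normal drawing
roll_down = [(1 / 1000, 2000.0), (1 / 50, 30.0)]   # prizes swollen by roll-down

print(expected_value(ordinary) - ticket_price)   # negative: the usual house edge
print(expected_value(roll_down) - ticket_price)  # positive: the calculated edge
```

The calculation itself is trivial; the insight was noticing which drawings made it come out positive, and buying at a scale where the law of large numbers made the profit dependable.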

The Utility Revolution

The St. Petersburg paradox showed that the arithmetic average fails when risk and ruin loom large. Daniel Bernoulli resolved it by introducing concave utility—the diminishing subjective value of wealth. Ellenberg uses this logic to connect everyday risk-taking to lotteries, investments, and public policy decisions.
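The paradox and its resolution can both be computed. In the standard game, round k pays 2^k with probability 2^-k, so every round adds 1 to the expectation and the sum diverges; under Bernoulli-style logarithmic utility the same sum converges:

```python
import math

def st_petersburg_ev(max_rounds: int) -> float:
    """Expected cash payoff, truncated at max_rounds: each term equals 1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))

def st_petersburg_log_utility(max_rounds: int) -> float:
    """Expected log-payoff (concave utility): the series converges to 2*ln(2)."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, max_rounds + 1))

print(st_petersburg_ev(30), st_petersburg_ev(60))  # 30.0, 60.0: grows without bound
print(st_petersburg_log_utility(30), st_petersburg_log_utility(60))  # both near 1.386
```

The cash expectation says you should pay any price to play; the utility expectation says the game is worth only a modest amount, which matches how people actually value it.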

Choosing Wisely

You learn to treat expected value as one layer and personal utility as another. A logical decision maximizes long-term welfare, not one-shot gain. The rational gambler knows when not to play; the rational policymaker uses expected lives saved to balance uncertain tradeoffs.

Essence

Smart use of probability requires both arithmetic precision and human judgment—expected value tells you gain; utility tells you if it’s worth it.


Geometry of Information and Design

Ellenberg’s closing chapters turn geometry into a toolkit for understanding chance, design, and data. You learn how scatterplots, finite planes, and error-correcting codes combine the visual and logical sides of math.

Seeing Correlation as Geometry

Francis Galton’s invention of the scatterplot made relationships visible. Ellenberg teaches you to read ellipses: tilt indicates direction, thinness indicates strength. Correlation becomes an angle between data vectors. This geometric lens clarifies why correlation isn’t transitive and why redundancy reduces information—in measurements, pixels, or voting data.
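The “correlation as an angle” claim is literal: center each variable at its mean and Pearson correlation is exactly the cosine of the angle between the two data vectors. A small sketch with made-up data:

```python
import math

def correlation(xs, ys) -> float:
    """Pearson correlation: the cosine of the angle between
    the two mean-centered data vectors."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cx = [x - mx for x in xs]
    cy = [y - my for y in ys]
    dot = sum(a * b for a, b in zip(cx, cy))
    return dot / (math.hypot(*cx) * math.hypot(*cy))

xs = [1.0, 2.0, 3.0, 4.0]
r = correlation(xs, [2.0, 4.1, 5.9, 8.0])
print(r)                                          # near +1: nearly parallel vectors
print(math.degrees(math.acos(r)))                 # the tiny angle between them
print(correlation(xs, [4.0, 3.0, 2.0, 1.0]))      # -1: anti-parallel vectors
```

Seen this way, non-transitivity stops being mysterious: vector A can sit at a small angle to B, and B to C, while A and C are nearly perpendicular.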

Buffon to Fano: Chance Meets Structure

Dropping needles to estimate π connects geometry and probability. The Fano plane—a seven-point finite geometry—shows how combinatorial design can optimize lottery tickets or error-correcting codes. These examples demonstrate how spatial reasoning can solve problems of randomness.
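Buffon’s needle is a two-line Monte Carlo experiment: a needle of length L dropped on lines spaced d apart (with L ≤ d) crosses a line with probability 2L/(πd), so counting crossings estimates π. A sketch with arbitrary drop counts:

```python
import math
import random

random.seed(4)

def buffon_pi(drops: int, needle: float = 1.0, gap: float = 1.0) -> float:
    """Estimate pi by simulating needle drops on lines spaced `gap` apart."""
    crossings = 0
    for _ in range(drops):
        center = random.uniform(0, gap / 2)     # distance from center to nearest line
        angle = random.uniform(0, math.pi / 2)  # needle orientation
        if center <= (needle / 2) * math.sin(angle):
            crossings += 1
    # Invert P(cross) = 2 * needle / (pi * gap) to solve for pi.
    return 2 * needle * drops / (gap * crossings)

est = buffon_pi(200_000)
print(est)  # hovers near 3.14
```

It is a terrible way to compute π and a wonderful way to see that geometric structure pins down a probability exactly.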

Error-Correcting Elegance

Richard Hamming’s insight—distance between code words protects information—embodies robust design. Each message becomes a point in multidimensional space. Claude Shannon proved random codes often suffice; Denniston's lottery system applied similar geometry to reduce risk. The link from sphere-packing to ticket selection illustrates math’s universality: structure preserves reliability.
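Hamming’s distance idea is concrete enough to run. The classic Hamming(7,4) code below stores 4 data bits in 7 transmitted bits; because every pair of valid codewords differs in at least three positions, any single flipped bit lands closest to exactly one codeword and can be corrected:

```python
def hamming_encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

msg = [1, 0, 1, 1]
code = hamming_encode(msg)
code[4] ^= 1                     # corrupt one bit in transit
print(hamming_decode(code))      # prints [1, 0, 1, 1]: the message is recovered
```

This is the sphere-packing picture in miniature: each codeword owns the ball of 7-bit strings within distance one of it, and those balls tile the space without overlapping.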

Ultimate lesson

Geometry doesn’t just describe space—it organizes information, reduces error, and sharpens intuition. When you see data as shapes, patterns become visible and solvable.


Causation, Choice, and the Limits of Knowledge

The final theme concerns truth itself—how we infer causation, choose collectively, and define certainty in logic. Ellenberg links medical inference, voting paradoxes, and Gödel’s incompleteness to show that perfect rationality is impossible but structured reasoning is still trustworthy.

From Correlation to Causation

Smoking’s link to cancer, Fisher’s skepticism, and niacin’s failed promise underline how correlation invites but doesn’t prove causation. You act wisely only when multiple lines of evidence converge. In policy, you often decide under uncertainty—expected value replaces absolutes when randomized trials are unethical or impossible.

Voting and Paradox

Condorcet cycles and Borda counts reveal democracy’s mathematical fragility: collective preferences can contradict themselves. From Florida 2000 to Burlington 2009, procedural rules, not public will, determined outcomes. Ellenberg argues you must choose your preferred paradox—the fairness values you prioritize.
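The fragility is not rhetorical; the classic three-voter Condorcet example can be checked mechanically. Each pairwise contest has a clear majority winner, yet the majorities chase each other in a circle:

```python
# The classic Condorcet cycle: three voters, three candidates.
ballots = [
    ["A", "B", "C"],   # voter 1 ranks A > B > C
    ["B", "C", "A"],   # voter 2 ranks B > C > A
    ["C", "A", "B"],   # voter 3 ranks C > A > B
]

def majority_prefers(x, y):
    """True if a majority of ballots rank candidate x above candidate y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

print(majority_prefers("A", "B"))  # True: A beats B two votes to one
print(majority_prefers("B", "C"))  # True: B beats C two votes to one
print(majority_prefers("C", "A"))  # True: C beats A, closing the cycle
```

Every individual ballot is perfectly consistent; only the aggregate contradicts itself, which is why the choice of voting rule, not the electorate, can decide the winner.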

Foundations and Limits

Mathematics itself isn’t free from paradox. From Euclid’s parallel postulate to Gödel’s incompleteness, formal systems expose inherent limitations. Hilbert sought total assurance; Gödel proved you can’t have it. Yet those boundaries empower creativity: non-Euclidean geometry, like policy and democracy, thrives by changing axioms.

Closing insight

Uncertainty is not failure—it’s the natural boundary of knowledge. Mathematical reasoning doesn’t eliminate doubt; it systematizes it so you can think, choose, and act intelligently.
