
Calling Bullshit

by Carl T. Bergstrom and Jevin D. West

Calling Bullshit is an incisive guide for navigating misinformation in our data-driven world. Authors Bergstrom and West reveal how to identify and challenge manipulated data, encouraging readers to discern truth in a sea of digital noise.

The Anatomy of Bullshit

How can you protect yourself from persuasive nonsense in an age of endless data and algorithmic spin? In Calling Bullshit: The Art of Skepticism in a Data-Driven World, Carl Bergstrom and Jevin West argue that bullshit—information presented without regard for truth—has evolved in both form and strategy. What began as rhetorical puffery is now turbocharged by statistics, algorithms, and media incentives that reward clicks over clarity. Their aim is not cynicism but empowerment: to teach you how to see through misleading claims, numbers, and visuals with the same ease that you spot a badly Photoshopped image.

The authors start from philosopher Harry Frankfurt’s insight that bullshit is speech produced without concern for truth. Unlike lies, which intend deceit, bullshitters care more about effect than accuracy. They range from corporate mission statements that sound profound but say nothing, to data-driven claims that use mathematical language to intimidate rather than inform. Bergstrom and West shift your focus from intent to structure: what makes some statements persuasive despite being vacuous or false.

From Rhetoric to Data Bullshit

Classical bullshit relied on grand phrases and weasel words (“up to 50% improvement,” “studies show”), leaving room for retreat. Today’s version embeds itself in graphs, p-values, and AI systems. Numbers appear objective but can be chosen, framed, or scaled to tell nearly any story. The shift from verbal spin to quantitative theater marks a dangerous evolution—because numbers, unlike words, carry an aura of authority that discourages questioning.

The authors show that you don’t need advanced math to unpack jargon-laden claims; you only need critical habits. Ask who picked the data, how representative it is, and whether the conclusions plausibly follow. Inquiry into assumptions often defeats statistical obfuscation faster than computation.

Why We Fall for It

Evolution built both our capacity to deceive and to be deceived. From the mantis shrimp’s bluffing stance to the raven that fake-stashes food, deception helps organisms survive. Humans simply upgraded deception with language and theory of mind. Much human bullshit, the authors note, serves a signaling function: you tell stories not just to inform, but to manage how others perceive your intelligence, virtue, or belonging. That instinct, when combined with modern communication platforms, produces a flood of performative discourse focused on attention rather than truth.

When online virality and identity overlap, even honest people spread nonsense. The Internet rewards outrage and affirmation, not patience and nuance. Algorithms that optimize for engagement systematically amplify emotional and polarizing content—the very traits of effective bullshit. Bergstrom and West connect this to the “firehose” tactics used in disinformation campaigns: overwhelm the public with contradictory claims until distinguishing truth from falsity feels impossible.

Information Ecology and Attention Economics

If printing once empowered scholars, cheap digital publication flooded the landscape with noise. Platforms, driven by engagement metrics, now curate personal feeds that reinforce confirmation bias. A headline saying “will make you cry” outperforms one saying “is true.” Brandolini’s law captures the asymmetry: it takes far more energy to refute bullshit than to produce it. As a result, bad information spreads faster than accurate corrections can catch up with it.

The authors’ remedy is education in critical numeracy and media literacy. Truth-seeking in this era means checking sources, triangulating evidence, and understanding cognitive and statistical traps. But Bergstrom and West go further: they teach you to speak the language of data skeptically—seeing numbers not as truth, but as arguments requiring context.

Learning to Call It

Calling bullshit isn’t about smugness; it’s a civic skill. Done well, it rebalances the conversation toward accountability and honesty. The book provides systematic tools for every domain: inspecting samples, watching for causal leaps, re-scaling misleading graphs, and exposing algorithmic opacity. Criticism, when informed, preserves public trust rather than undermining it. The goal isn’t universal skepticism—it’s calibrated confidence: knowing when to withhold belief until claims earn it.

Across fields—journalism, science, policy, and everyday conversation—Bergstrom and West’s message is consistent. Bullshit thrives on complexity, haste, and deference. It dies under curiosity, humility, and simple arithmetic. You don’t need to open every black box or master every dataset; you just need to keep asking how things could be otherwise. Their art of skepticism is, ultimately, a form of ethical attention: a commitment to clarity in a world awash with noise.


Numbers and Quantitative Illusions

Numbers seduce you with precision, but they often hide ambiguity. Bergstrom and West teach that statistical claims deserve the same scrutiny as rhetoric. What matters isn’t only what is counted, but how it’s counted, framed, or compared. Many misleading claims thrive because they replace context with apparent rigor.

Percentages and Baselines

Percentage changes without base rates create false urgency. A health headline declaring “a 50% increase in risk” could mean almost nothing if the baseline risk is tiny. Always translate relative risk into absolute differences and population counts. When a tax rises from 4% to 6%, that’s a 2-point increase—but a 50% relative jump. Small framing changes massively alter interpretation.
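The arithmetic behind this habit takes only a few lines. A minimal sketch (the baseline risk figures are hypothetical, chosen to mirror the examples in the text):

```python
def risk_change(baseline, new):
    """Return (absolute difference, relative change) between two rates."""
    return new - baseline, (new - baseline) / baseline

# Hypothetical risks: 2 in 10,000 rising to 3 in 10,000.
abs_diff, rel = risk_change(0.0002, 0.0003)
print(f"relative: +{rel:.0%}")       # +50% -- the scary headline number
print(f"absolute: +{abs_diff:.4%}")  # +0.0100% -- one extra case per 10,000

# The tax example from the text: 4% -> 6%.
abs_diff, rel = risk_change(0.04, 0.06)
print(f"{abs_diff * 100:.0f}-point increase, +{rel:.0%} relative jump")
```

The same underlying change yields two very different-sounding numbers; always ask which one the headline chose, and why.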

Mathiness and Zombie Statistics

“Mathiness” is their word for formulaic theater—equations that look rigorous but define nothing real. The so-called VMMC “Quality Equation” and similarly dubious trust formulas appear quantitative, but their terms have no defined units or measures, rendering them meaningless. Equally persistent are “zombie statistics”—numbers repeated long after their origin is forgotten (“50% of papers never cited”) that survive by repetition rather than validity.

If the logic or units collapse under inspection, the aura of science becomes stage dressing. Treat every computation as a claim requiring context: who measured what, using which definitions, under what circumstances?

Orders of Magnitude

Fermi-style estimation is one of the book’s most empowering habits. When the numbers sound huge, make rough checks: how many people, objects, or transactions could plausibly be involved? National Geographic once claimed nine billion tons of plastic enter the ocean annually; total historical production stands near eight billion tons. A simple plausibility check reveals the impossibility. These habits let you detect absurd claims within seconds—no spreadsheet required.
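The check itself is a single comparison; here it is as a sketch, using the rounded figures quoted in the text:

```python
# A Fermi check on the plastic claim discussed in the text (round figures):
claimed_annual_flow = 9e9   # tons of plastic said to enter the ocean each year
total_ever_produced = 8e9   # rough total tons of plastic ever manufactured

# An annual flow larger than everything ever made is impossible on its face.
plausible = claimed_annual_flow <= total_ever_produced
print(plausible)  # False
```

The point is not the code but the reflex: before trusting a big number, compare it to an upper bound you already know.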

Key Practice

Treat every number as an argument, not a fact. Ask for baselines, check units, and test plausibility with back-of-the-envelope reasoning.

When you learn to question the quantitative veneer—percentages, equations, graphs—you drain much of bullshit’s power. Numbers can illuminate truth, but only in the hands of people who remember that context counts more than computation.


Correlation, Causation, and Bias

Most misinformation hides in plain correlations. Bergstrom and West return repeatedly to the theme that association is not causation. Observational data, if misread, tell stories the data alone cannot justify. You must ask what alternative explanations exist and whether hidden variables shape both sides of an observed link.

Confounding and Directionality

When two things co-occur, one might cause the other, or both may stem from a third factor. The marshmallow-delay studies initially suggested self-control predicts life success. But replication revealed socioeconomic context explained much of the difference: wealthier homes fostered both patience and higher achievement. Without testing directionality, even well-meaning science misleads.

Selection and Sampling Bias

Selection effects distort perception without any malice. Insurance ads touting “switchers saved $500” count only those who saved enough to switch. Charts showing that rappers die young ignore the living: rap is a young genre, so the rappers who have died so far are disproportionately young. From Berkson’s paradox to the friendship paradox, these structural illusions show that the world looks different depending on where you sample it.
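A tiny simulation makes the switcher effect concrete. Everything here is hypothetical: assume switching is a wash on average, then survey only the people who switched.

```python
import random

random.seed(0)
# Hypothetical market: how much each of 100,000 drivers would save by
# switching insurers. Symmetric around zero -- no advantage on average.
savings = [random.gauss(0, 400) for _ in range(100_000)]
everyone_avg = sum(savings) / len(savings)

# The ad only surveys people who actually switched -- i.e., those who saved.
switchers = [s for s in savings if s > 0]
switcher_avg = sum(switchers) / len(switchers)

print(f"average over everyone:  ${everyone_avg:.0f}")   # ~ $0
print(f"average over switchers: ${switcher_avg:.0f}")   # ~ $320, from nothing
```

No one lied; the sample simply selected itself.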

The cure is design awareness: randomize when possible, inspect who was included and excluded, and test whether trends persist across data subsets. Observational studies often conceal asymmetries that causal inference tools can reveal.

Core Reminder

Every dataset tells a partial truth. Only when you know how it was gathered and filtered can you judge its claims.

Drawing arrows on a causal diagram, separating cause from consequence, or checking for hidden third factors are intellectual hygiene practices. They convert naive pattern-seeing into evidence-based reasoning—the difference between bullshit and insight.


Seeing Through Visual Spin

Visuals persuade faster than text, which makes charts both powerful tools and potent traps. Bergstrom and West show the mechanics of misleading graphs: truncated axes exaggerating differences, dual scales implying correlations, and decorative design that trades accuracy for aesthetic allure. Reading charts critically is a visual literacy essential for citizenship today.

Axes, Scales, and Bin Widths

Inverted or truncated axes can flip meaning, turning increases into decreases or minor changes into dramatic shifts. Re-scaled time axes, uneven binning (like wildly varying income brackets), and cherry-picked time slices generate false stories. Always check if a bar graph starts at zero and if intervals remain consistent. When possible, ask to see the raw scatter rather than pre-binned means; variability often weakens apparent trends.

Proportional Ink and 3D Effects

The “principle of proportional ink” states that the amount of ink used should be proportional to the quantity it represents. Violations—like icon areas quadrupling when a value merely doubles, or 3D pie slices distorting angles—convey false magnitudes. USA Today-style “duck” infographics and Ontario’s inflated 3D election pies look professional yet misinform. Clean two-dimensional bars or lines, scaled from zero, communicate far more honestly than stylized art.
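The most common violation is easy to quantify: when a pictogram scales an icon's width and height by the same factor, the ink grows as the square of the data. A minimal sketch:

```python
def ink_ratio(v_old, v_new, scale_both_dimensions):
    """How much more ink the second value gets than the first in a pictogram."""
    linear = v_new / v_old
    # Honest charts scale one dimension; decorative icons often scale both,
    # squaring the apparent difference.
    return linear ** 2 if scale_both_dimensions else linear

print(ink_ratio(10, 20, scale_both_dimensions=False))  # 2.0 (honest bar)
print(ink_ratio(10, 20, scale_both_dimensions=True))   # 4.0 (inflated icon)
```

A doubled value drawn as a doubled icon looks four times as big, which is exactly the impression the designer wanted you to take away.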

Metaphor Traps

Charts co-opt metaphors—periodic tables of anything, subway maps of corporate networks—that imply structure where none exists. Bergstrom and West call these “glass slippers”: shapes that don’t fit the data but seduce by familiarity. Real insight demands form follow meaning, not marketing.

Rule of Thumb

  • Check whether axes start at zero and intervals are uniform.
  • Question dual y-axes and decorative exaggerations.
  • Redraw misleading visuals mentally into plain charts.

Once you know these tricks, you can reclaim your intuition. Instead of being swayed by color and contour, you can read graphics for their logical structure, spotting when design becomes deception.


Science, Incentives, and Replication

Science, the antidote to bullshit, sometimes produces its own. Bergstrom and West trace how incentives, publication conventions, and statistical misuses lead to unreliable findings. They show that the credibility of science depends not only on methods but on the ecosystem of journals, funding, and prestige within which research occurs.

Understanding P-Values and Misinterpretations

A p-value shows how unusual your data would be if no real effect existed. It does not tell you the probability your hypothesis is true. Confusing these is the “prosecutor’s fallacy.” Media often translate p = 0.01 into “99% certain,” inflating weak evidence into near-certainty. P-hacking—massaging analyses until p drops below 0.05—turns noise into publishable “discoveries.”
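A simulation shows why chasing p < 0.05 manufactures findings. The sketch below (not from the book) runs 1,000 hypothetical "studies" of a perfectly fair coin; some clear the significance bar anyway, and reporting only those is p-hacking in miniature.

```python
import math
import random

random.seed(1)

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of any outcome
    at least as unlikely as observing k heads in n flips."""
    pmf = lambda i: math.comb(n, i) * p**i * (1 - p)**(n - i)
    observed = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= observed + 1e-12)

# 1,000 "studies" of 100 fair-coin flips: no real effect exists anywhere.
n = 100
pvals = [binom_two_sided_p(sum(random.random() < 0.5 for _ in range(n)), n)
         for _ in range(1000)]
false_positives = sum(p < 0.05 for p in pvals)
print(false_positives)  # a few dozen "significant" studies, all pure noise
```

Publish only the significant runs and you have a literature full of effects that were never there.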

The Replication Crisis

Replication projects in psychology, economics, and biomedicine reveal just how often results vanish on retesting. Begley and Ellis reproduced only 6 of 53 cancer studies; Ioannidis argued most published research findings are false under current incentives. When journals reward novelty and significance, negative or ambiguous outcomes disappear into file drawers. The cure, say Bergstrom and West, is transparent reporting, pre-registration, and valuing replication itself as scholarly contribution.

Predatory and Low-Quality Publishing

Predatory journals exploit “publish or perish” pressures, accepting bogus studies for fees. Parody papers like the Seinfeld-inspired “uromycitisis poisoning” article reveal nonexistent peer review. Even medical claims—like Mehmet Oz’s green coffee extract study—can slip through low-quality venues. You must check publication venue, data transparency, and replication record before believing a single-paper claim.

Guiding Question

Is this result part of a reproducible pattern or an isolated statistical accident supported by perverse incentives?

By recognizing how systems reward certain narratives, you can calibrate your trust in evidence. Real science—transparent, self-correcting, cumulative—remains our best tool for truth, but only when practiced without bullshit’s temptations.


Algorithms, AI, and Accountability

Modern bullshit increasingly wears the mask of computation. Machine learning and algorithmic systems promise neutrality but often encode human bias in mathematical form. Bergstrom and West help you pierce both hype and opacity by focusing on data provenance and accountability.

How Machines Actually Learn

Machine learning flips programming logic: instead of hard-coded rules, algorithms infer them from data. That places enormous trust in training sets. When data are biased—faces from limited populations, medical images with context labels—models latch onto spurious cues. The “criminal face” detector learned smiles, not criminality; X-ray classifiers picked up the word “PORTABLE” as a pneumonia signal. These systems don’t think; they mimic patterns they’re fed.

Overfitting and Overclaiming

Complex models can reproduce noise as structure. Google Flu Trends, once hailed as predictive genius, later failed dramatically when user behavior shifted. Without causal understanding, algorithms are brittle. Cross-validation and simple models often outperform black boxes in real-world robustness.
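Overfitting is easy to demonstrate on synthetic data. This sketch (an illustration, not an example from the book) fits a simple and a highly flexible polynomial to noisy observations of a straight line, then scores both on held-out points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a plain line; observations carry noise.
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)
y_test = 2 * x_test + rng.normal(0, 0.3, x_test.size)

def holdout_error(degree):
    """Fit a polynomial on the training set; return MSE on unseen points."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

# The flexible model memorizes training noise and pays for it elsewhere.
simple, flexible = holdout_error(1), holdout_error(15)
print(f"degree 1:  {simple:.3f}")
print(f"degree 15: {flexible:.3f}")
```

The degree-15 fit hugs the training points more tightly yet predicts worse, which is the whole argument for validating on data the model has never seen.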

Bias, Opacity, and Policy

Algorithmic decisions now govern bail, hiring, loans, and ads. Bias arises when proxies (zip codes, names) correlate with protected traits. The “right to explanation” under Europe’s GDPR exemplifies the growing demand for interpretability. Tools that reveal which features drive outcomes—like image saliency maps—help uncover what a model has actually learned. Accountability depends on such transparency.

GIGO Principle

Garbage in, garbage out: no algorithm can transcend the quality, fairness, or scope of its training data.

By replacing mystique with scrutiny, you can distinguish between credible, bounded uses of AI (like postal address recognition) and overblown promises. Ethical deployment means demanding interpretability whenever algorithms affect human welfare.


Defending Truth in a Noisy World

Bergstrom and West end where they began: with the citizen’s responsibility to think clearly. The final chapters equip you to spot and refute misinformation in daily life without despair or arrogance. The Internet’s velocity makes skepticism vital; your attention and tone determine whether truth survives its race with spectacle.

Spotting Digital Deception

When a claim outrages you, slow down. Ask who’s telling it, how they know, and what they hope to gain. Reverse-image-search photos, read past headlines, and identify credible original sources. Domain mimicry and cropped screenshots spread faster than corrections. The illusory-truth effect—repetition breeding belief—means abstaining from sharing is as crucial as correcting.

Effective Refutation

Calling bullshit should enlighten, not humiliate. Bergstrom and West recommend clarity over confrontation: use reductio ad absurdum or analogies (like trees disproving “only immune systems sustain complexity”) to show inconsistency, not superiority. Fact-check your rebuttals; wrong corrections weaken trust. Choose public refutation only when the stakes justify the attention—public health, democratic integrity, scientific literacy.

A Culture of Critical Sympathy

Bullshit thrives on cynicism’s twin, apathy. The authors advocate charitable skepticism: assume confusion before malice, and teach as you correct. When individuals reclaim responsibility for evaluating evidence—through curiosity, math sense, and empathy—the information ecosystem slowly detoxifies.

Final Lesson

Don’t aim to win arguments; aim to restore shared reality.

In a world shaped by algorithms, incentives, and tribal media, calling bullshit is both self-defense and civic duty. It is the modern art of clear seeing—balancing skepticism with humility and truth-seeking with kindness.
