The Computational Logic of Human Intelligence
Have you ever wondered how humans can solve such complex problems—and yet sometimes make absurdly simple mistakes? In What Makes Us Smart, Samuel Gershman argues that both our remarkable intelligence and our fallibility stem from the same underlying computational principles. We are not perfectly rational machines, but adaptive organisms navigating a world full of uncertainty, scarcity, and ambiguity. Gershman’s provocative thesis is that our quirks and biases aren’t design flaws—they’re the price of intelligence given limited data and limited computational capacity.
According to Gershman, to understand the human mind, we need to think like engineers of an imperfect system: one that evolved under constraints. His framework draws on cognitive science, Bayesian statistics, information theory, and neuroscience to explain how the brain approximates rational thought while staying frugal with energy and computation. Across thirteen chapters, he constructs a grand theory linking perceptual illusions, learning biases, social conformity, moral reasoning, and even language design through two central ideas: inductive bias and approximation bias.
Why Our Brains Are Biased—And Why That’s Smart
The human brain does not have the luxury of unlimited information or unlimited processing power. To make sense of the world, it must rely on biases: assumptions about how things usually work, or shortcuts that speed decisions at the cost of occasional errors. Inductive biases are the deep knowledge structures—like our sense of causality, object permanence, or linguistic rules—that help us generalize from few examples. Approximation biases, in contrast, are the computational shortcuts our brains take to save effort, like compressing sensory input or relying on a limited number of memory samples.
Together, these biases form what Gershman calls the computational logic of cognition: the idea that intelligent systems must balance accuracy, speed, and cost. Our minds are forced to trade precision for efficiency, and in doing so, they exhibit predictable systematic errors that can actually be understood as signatures of optimal design under constraints. In one sense, illusions and biases are rational responses to an imperfect world.
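The sample-based shortcut mentioned above can be made concrete with a toy simulation. The sketch below (all numbers and the gamble itself are illustrative, not drawn from the book) estimates the expected value of a risky gamble from a handful of mental "samples": with only a few samples the judgment is fast but noisy, which is exactly the accuracy-for-efficiency trade the text describes.

```python
import random

random.seed(0)

def sample_based_estimate(outcomes, probs, n_samples):
    """Estimate an expected value from a handful of 'mental samples',
    mimicking an approximation bias: fewer samples = faster but noisier."""
    draws = random.choices(outcomes, weights=probs, k=n_samples)
    return sum(draws) / n_samples

# An illustrative gamble: win 10 with prob 0.1, else 0; true expected value is 1.0.
outcomes, probs = [10, 0], [0.1, 0.9]

# Repeat each judgment 1000 times to see how variable it is.
few = [sample_based_estimate(outcomes, probs, n_samples=3) for _ in range(1000)]
many = [sample_based_estimate(outcomes, probs, n_samples=300) for _ in range(1000)]

spread = lambda xs: max(xs) - min(xs)
print(f"3-sample judgments range over {spread(few):.2f}")
print(f"300-sample judgments range over {spread(many):.2f}")
```

The few-sample judgments scatter far more widely around the true value than the many-sample ones, yet they cost a hundredth of the computation: a predictable, systematic error pattern emerging from an otherwise sensible strategy.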
From Bayesian Brains to Social Minds
Gershman begins with perception as the most fundamental case study: how the mind interprets uncertain sensory data. Borrowing from Bayesian probability, he shows how the brain behaves like a statistical reasoner—combining prior beliefs with new evidence to infer the most probable explanation. Visual illusions like the Ponzo illusion or the Moon illusion occur because our “priors” about perspective and depth are usually reliable, even if they misfire in contrived settings.
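The prior-plus-evidence logic of Bayesian perception can be sketched in a few lines. In this minimal example (the hypothesis names and probabilities are invented for illustration), an ambiguous small retinal image is explained either by a small nearby object or a large distant one, and a perspective cue shifts belief toward the latter—just as in the depth illusions above.

```python
def posterior(prior, likelihood):
    """Bayes' rule over discrete hypotheses: P(h | data) is proportional
    to P(data | h) * P(h), normalized to sum to one."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Two explanations for a small retinal image (illustrative numbers):
prior = {"small_near": 0.5, "large_far": 0.5}

# A perspective cue (converging lines) is more probable if the object is far.
likelihood = {"small_near": 0.2, "large_far": 0.8}

post = posterior(prior, likelihood)
print(post)  # belief shifts toward "large_far"
```

Swap in a prior that strongly favors "small_near" and the same cue barely moves the belief—one way to see why priors that are usually reliable can still "misfire" in contrived settings.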
These same inferential mechanisms extend beyond sight into how we reason about others, make moral judgments, and construct scientific theories. We don’t simply process sensory data; we construct and revise intuitive theories of the world—what Gershman calls “mental models” or “intuitive theories.” These include intuitive physics (how objects move), intuitive psychology (what people want or know), and intuitive sociology (how groups behave). Each of these theories relies on built-in biases that enable rapid learning and reasoning but also breed resistance to change, which explains why people cling to beliefs even in the face of contradictory evidence.
Learning, Language, and Rational Illusions
Throughout the book, Gershman moves seamlessly between experiments, anecdotes, and thought-provoking examples. Children over-regularizing past tenses (saying “runned” or “goed”) reveal compositional learning rules that generalize from limited input. Adults imitating unnecessary actions (overimitation) show the deep inferential roots of social learning. Even our tendency to conform, exhibit optimism bias, or fall for confirmation bias can be reframed as efficient strategies for learning under uncertainty.
Later chapters extend this logic into economics and linguistics: the brain as an information-compressing device (efficient coding) and language as an optimized medium balancing informativeness and effort. Gershman concludes by returning to the original paradox—our simultaneous brilliance and foolishness—arguing that both emerge from the same computational architecture. Understand that architecture, and you understand not only why people err but also what makes them uniquely intelligent.
Why It Matters
In a cultural moment dominated by AI and debates about machine intelligence, Gershman’s argument has profound implications. It suggests that the mystery of human intelligence is not its perfection but its adaptivity. We are smart because we are biased, brilliant because we are limited. By modeling those limitations, cognitive science can bridge natural and artificial intelligence. The result is a deeply optimistic view: understanding our flaws is the surest path to understanding our genius.