On Intelligence

by Jeff Hawkins, Sandra Blakeslee

Explore how the complex functions of the human brain can unlock the potential for creating intelligent machines. Authors Jeff Hawkins and Sandra Blakeslee reveal why current computers lack true intelligence and how future technological breakthroughs could revolutionize our world, promising incredible benefits rather than threats.

The Brain as a Prediction Machine

Why do humans effortlessly recognize a face in an instant, catch a baseball in midair, or predict what a friend will say next—while supercomputers struggle with the simplest of these tasks? In On Intelligence, Jeff Hawkins, inventor of the PalmPilot and founder of the Redwood Neuroscience Institute, presents a bold answer: the human brain is fundamentally a prediction machine. Intelligence, he argues, is not about behavior, logic, or problem-solving in the abstract—it's about using memory to predict the future.

Hawkins’s thesis, which he calls the memory-prediction framework, proposes that intelligence emerges from the brain’s ability to store sequences of patterns, recall them, and use these memories to constantly anticipate what comes next. The brain’s neocortex—its large, six-layered outer shell—is not a computer that calculates but a vast memory system operating on one elegant principle across all regions. From sight and touch to music and language, the cortex uses stored memories of sequences to predict upcoming experiences. When predictions are accurate, we feel understanding; when they’re wrong, we notice and learn.

From Silicon Valley to Neuroscience

Hawkins’s unusual career shaped his perspective. As the creator of handheld computers, he developed a passion for building machines that resemble human thinking—but found that the field of artificial intelligence (AI) had failed to address the true nature of intelligence. Disillusioned by AI’s focus on mimicking behavior rather than understanding the brain, he trained himself as a neuroscientist. What frustrated him most was the absence of an overarching theory that could explain how intelligence actually arises. Neuroscience, he noted, had mountains of data but no unifying map. Psychology and computer science, on the other hand, tried to model intelligence without grounding it in biology. Hawkins aimed to bridge this divide: to combine the computational perspective of an engineer with the biological realism of the neuroscientist.

This interdisciplinary mission led Hawkins to establish the Redwood Neuroscience Institute in 2002, dedicated to understanding the neocortex’s computational principles. He believed that the only way to create truly intelligent machines—what he calls “real intelligence” as opposed to “artificial intelligence”—was to first understand the brain’s algorithm for intelligence. Machines could then be built that didn’t just simulate human behavior but actually thought in analogous ways.

Why Prediction Defines Intelligence

At the heart of Hawkins’s theory is a profound redefinition of intelligence: not as logic, behavior, or problem-solving per se, but as the ability to predict the future based on past experience. The brain constantly receives streams of sensory input—from light, sound, touch, and internal sensations—and builds a model of how the world unfolds over time. Each experience is stored as a sequence of patterns in the neocortex. When new input arrives, the brain searches for familiar patterns and anticipates what should follow. If expectations are met, we understand; if they are violated, we pay attention and learn. This constant process of matching prediction to reality is what we experience as perception, thought, and consciousness.

Imagine walking through your front door. You don’t consciously think about the feel of the doorknob, the sound of the hinges, or the weight of the door—it all feels automatic. But if someone were to shift the knob by just an inch, you would immediately sense that something was off. That’s because your brain was predicting the feel and location of the knob, the resistance of the door, and the sound it should make. Dozens of regions in your cortex were running parallel predictions at lightning speed. This simple example, Hawkins notes, captures the essence of intelligence: the ability to make and update predictions based on memory.
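The doorknob example can be sketched as a toy memory-prediction cycle: store a familiar sequence of patterns, predict each next pattern, and flag any mismatch as a surprise. This is an illustrative simplification, not the book's actual model; the `SequenceMemory` class and the pattern names are invented for the example.

```python
# Toy sketch of the memory-prediction cycle: learn a familiar sequence,
# then compare predictions against new input and surface mismatches.

class SequenceMemory:
    def __init__(self):
        self.next_pattern = {}  # pattern -> predicted successor

    def learn(self, sequence):
        # Store each transition in the sequence as a prediction.
        for current, nxt in zip(sequence, sequence[1:]):
            self.next_pattern[current] = nxt

    def perceive(self, observed):
        # Walk through new input, comparing prediction to reality.
        surprises = []
        for current, actual in zip(observed, observed[1:]):
            predicted = self.next_pattern.get(current)
            if predicted != actual:
                surprises.append((current, predicted, actual))
        return surprises

memory = SequenceMemory()
memory.learn(["reach", "grip knob", "turn", "push", "creak"])

# The familiar routine matches expectations: no surprises.
print(memory.perceive(["reach", "grip knob", "turn", "push", "creak"]))  # []

# Shift the knob: the prediction fails, and attention goes to the mismatch.
print(memory.perceive(["reach", "miss knob"]))
```

When predictions hold, the routine passes unnoticed; the single violated prediction is exactly what the cortex, in Hawkins's account, escalates for attention and learning.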

The Neocortex: One Algorithm Everywhere

Building on the insights of neuroscientist Vernon Mountcastle, Hawkins argues that every region of the cortex carries out the same fundamental algorithm—a universal process for modeling the world through sequences of patterns. The visual, auditory, and motor regions all use the same cortical structure. What makes one area “visual” and another “verbal” is the kind of input it receives and the connections it forms—not any unique “vision” or “language” circuitry. Experiments have shown that if a newborn ferret’s visual nerve is rewired to the auditory cortex, the animal learns to see with brain tissue that normally hears. The cortex, then, is a general-purpose learning engine that can model any kind of structured input. This insight suggests that genuine machine intelligence can be built by replicating the cortical algorithm rather than mimicking human behaviors.

What the Book Covers

In the chapters that follow, Hawkins contrasts his model with failed approaches in AI and neural networks, explores the structure of the brain’s six-layered hierarchy, and explains how memory, prediction, and hierarchy combine to produce thought, creativity, and consciousness. He explores why intelligence emerged in biology, how creativity results from predictive analogy, and why machine intelligence, properly built, need not threaten humanity. By the end, he paints a future where intelligent machines augment rather than replace us—tools that think, learn, and discover patterns as nature does.

At its core, On Intelligence invites you to see your own mind not as a mysterious black box but as an elegant, pattern-learning, future-predicting engine—one whose principles might soon transform both neuroscience and technology. Understanding it, Hawkins insists, is the key to understanding not only how we think but what intelligence truly means.


Why Artificial Intelligence Went Wrong

Artificial intelligence, as it evolved through the twentieth century, promised machines that could think. Early pioneers like Alan Turing, Warren McCulloch, and John McCarthy envisioned a future where computation could replicate human reasoning. But Hawkins argues that AI failed because it misunderstood what intelligence actually is. While researchers tried to simulate human behavior—playing chess, translating languages, solving puzzles—they ignored the one organ that already performed these feats: the brain itself.

Symbol Manipulation Without Understanding

Classic AI was built on the idea that intelligence could be achieved by manipulating symbols according to rules. Alan Turing’s model of computation—the Turing Machine—became the foundation. If a machine could manipulate abstract symbols fast enough, researchers reasoned, it could simulate intelligent behavior. This logic led to the Turing Test: if a human couldn’t tell whether a computer or a person was responding, the computer must be intelligent. Yet, Hawkins notes, this test reduced intelligence to behavior, not understanding. It judged outputs, not internal processes.

Philosopher John Searle illustrated this problem with his famous Chinese Room thought experiment. A man in a room follows instructions to manipulate Chinese characters without knowing their meaning. To an outside observer, the answers seem fluent—but inside, there is no understanding. Computers, Hawkins argues, are just like the man in the room. They follow rules without knowing what they mean. Understanding is not about behavior—it’s about building an internal model that can make predictions about the world.

Neural Networks: A Step, But the Wrong Kind

In the 1980s, excitement shifted to neural networks—mathematical systems loosely inspired by the brain’s neurons. These networks learned to recognize patterns, such as spoken sounds or written letters, by adjusting the “strength” of connections through training. Projects like NetTalk, which taught a computer to read text aloud, made headlines. But Hawkins quickly grew disillusioned. Neural networks ignored three essential features of real brains: time, feedback, and hierarchical structure.

  • Unlike real brains, neural networks processed static inputs rather than sequences that unfold over time.
  • They lacked feedback—information flowing backward that helps the brain refine its predictions.
  • Their architectures were simplistic, often just three layers deep, compared with the brain’s vast, repeating six-layered hierarchy.

Because of these omissions, neural networks could classify static patterns—like recognizing a digit—but failed to understand, remember, or generalize. They had no sense of past or future. Hawkins compares these systems to studying a few transistors and thinking you’ve understood how an entire computer works. A true theory of intelligence, he contends, must explain how brains use temporal sequences and feedback loops to anticipate what happens next.

The Input-Output Fallacy

The fatal flaw in both AI and neural networks, Hawkins writes, is the “input-output fallacy”—the belief that intelligence is defined by behavior. Intelligence doesn’t reside in outputs like speech or motion; it resides in the brain’s internal capacity to form predictions from stored memory. A person can be intelligent lying silently in the dark, thinking and understanding. What matters is not what we do, but what we foresee. Behavior, he reminds us, is merely the visible manifestation of the brain’s internal prediction engine.

By dismissing biology, both AI and neural networks missed the point entirely. The brain is not a computer that follows step-by-step instructions—it’s a memory-based system that learns the structure of the world and projects it forward. Until machine intelligence operates on this principle, Hawkins insists, computers will remain fast but mindless—mere Chinese Rooms flipping pages, blind to meaning.


The Neocortex: One Algorithm, Many Powers

The neocortex is the seat of human intelligence, language, creativity, and consciousness. To understand how it produces such a range of powers, Hawkins draws on a remarkable insight made by neuroscientist Vernon Mountcastle. In 1978, Mountcastle argued that every part of the neocortex looks essentially the same under the microscope—six layers of repeating cell types performing similar computations. If vision, hearing, touch, and language all share this same structure, Hawkins asks, might they all use the same algorithm?

A Universal Pattern Machine

Mountcastle proposed that the cortex is not divided by function as much as by connection. What makes one region visual and another auditory is the input it receives, not a difference in circuitry. Hawkins takes this idea to its logical extreme: the cortex is a general-purpose pattern-learning device. Its six-layered structure stores sequences of patterns, predicts what will come next, and updates its predictions through feedback. Every patch of cortex—from the area recognizing faces to the one processing music—runs the same algorithm.

Evidence for this idea comes from stunning experiments in neural plasticity. When researchers rewired the optic nerves of ferrets to their auditory cortices, the animals learned to see using “hearing” tissue. Similarly, blind individuals use their visual cortices to read braille. These discoveries reveal that the cortex doesn’t care what kind of data it receives; it learns whatever patterns arrive. This generality, Hawkins notes, means that the same architecture capable of sight can also compose symphonies, invent calculus, or imagine universes—achievements that differ only in scale and input, not in core operation.

Hierarchies Within Hierarchies

Hawkins extends Mountcastle’s insight with his model of a hierarchical cortex. Each level of the hierarchy processes increasingly abstract representations. For example, in the visual system: cells in area V1 detect edges and angles; higher regions like V4 recognize shapes and objects; and at the top, the inferior temporal (IT) cortex identifies faces or buildings. Information flows upward as details are assembled into wholes, and simultaneously downward as expectations guide perception. You “see” the world not just as it is, but as your cortex predicts it should be.

This dual flow—bottom-up sensory data and top-down prediction—is constant and pervasive. Hawkins points out that even as you read these words, your visual cortex is sending more signals downward than it receives from your eyes. Predictions dominate perception. The neocortex, in essence, uses stored knowledge of the world’s regularities to interpret noisy, incomplete data. That’s why you perceive a stable, colorful world even though your eyes make rapid, jerky movements (saccades) three times a second.

Patterns Within Patterns

To make sense of reality, the cortex exploits the world’s hierarchical, nested structure. A room is part of a house; a wall contains windows; a window has frames and latches. Likewise, words form sentences, and notes form melodies. The cortex mirrors this hierarchy: lower regions store fine details, while higher ones store entire contexts. When you understand the song “Somewhere Over the Rainbow,” your brain links thousands of tiny auditory patterns into a continuous, predictable sequence. That hierarchy of sequences—patterns within patterns—is the essence of understanding.
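The nesting described above can be sketched as a toy data structure: lower levels name short sequences of details, and higher levels name sequences of those names. The structure, not the data, is the point; the dictionary, the `unfold` helper, and the syllables are all invented for illustration.

```python
# A toy nested hierarchy: patterns within patterns.

hierarchy = {
    # Level 1: fine details grouped into named chunks.
    "phrase_1": ["some", "where", "o", "ver"],
    "phrase_2": ["the", "rain", "bow"],
    # Level 2: chunks grouped into a larger whole.
    "melody": ["phrase_1", "phrase_2"],
}

def unfold(name):
    # Expand a high-level pattern down to its finest details.
    parts = hierarchy.get(name)
    if parts is None:
        return [name]  # a raw detail, not a stored pattern
    result = []
    for part in parts:
        result.extend(unfold(part))
    return result

print(unfold("melody"))
# ['some', 'where', 'o', 'ver', 'the', 'rain', 'bow']
```

A high-level region needs only the short sequence of chunk names; the details are reconstructed on demand by the levels below, which is what makes the representation compact and predictive at every scale.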

By seeing the cortex as one vast, recursive algorithm, Hawkins unifies the mysteries of perception, memory, and imagination. Vision, language, and thought are not separate tricks of evolution but different expressions of the same predictive computation. The neocortex, he concludes, is not a patchwork of specialized modules but a single, powerful algorithm that models the structure of the world—and the self—through memory and prediction.


Memory as the Engine of Thought

If intelligence is prediction, then memory is its fuel. In Hawkins’s model, the cortex does not compute logical operations like a traditional computer; it remembers sequences and uses them to anticipate the future. Every perception, from a melody to the feel of a door handle, unfolds over time. The cortex stores each sequence as patterns of neural activity forming what Hawkins calls a “memory-prediction cycle.”

How Memory Works in the Cortex

Your brain stores not static images but temporal flows. To recall your home, you mentally “walk” through it—you can’t visualize it all at once. Memories are stored as sequences and recalled the same way. This explains why songs, routines, and even habits feel serial. You can sing “Happy Birthday” forward but probably not backward. This sequential recall also makes the cortex an auto-associative memory—it can retrieve a whole sequence when given a small cue. A single note reminds you of a symphony; a scent evokes a childhood scene. Like Marcel Proust’s madeleine, a fragment of input can auto-activate an entire network of stored experiences.
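Auto-associative recall, as described above, can be sketched in a few lines: given a small cue, retrieve the whole stored sequence it belongs to. The `AutoAssociativeMemory` class and its linear-scan lookup are illustrative simplifications, not a model of cortical circuitry.

```python
# Minimal sketch of auto-associative recall: a fragment of input
# reactivates the entire stored sequence that contains it.

class AutoAssociativeMemory:
    def __init__(self):
        self.sequences = []

    def store(self, sequence):
        self.sequences.append(list(sequence))

    def recall(self, cue):
        # Return the first stored sequence containing the cue.
        for sequence in self.sequences:
            if cue in sequence:
                return sequence
        return None

memory = AutoAssociativeMemory()
memory.store(["E", "D", "C", "D", "E", "E", "E"])          # opening of a tune
memory.store(["smell of bread", "kitchen", "grandmother"])  # a remembered scene

# A single note brings back the whole melody; a scent, the whole scene.
print(memory.recall("C"))
print(memory.recall("smell of bread"))
```

This is the Proust effect in miniature: the cue need not be the start of the sequence, only some part of it.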

Invariance: Recognizing the Same in the Different

The cortex doesn’t store raw sensory data. It memorizes the relationships—the invariant features that persist despite change. You recognize your friend’s face whether she’s smiling, in shadow, or far away because your brain has stored her face in an invariant form. Every time you interact with an object or idea, your cortex abstracts the enduring pattern that defines it, ignoring the details that vary. Seeing, hearing, touching, and moving through life refine these invariant representations, letting you apply old knowledge to new contexts. This ability to generalize is what lets a musician recognize a melody in any key or a child recognize a dog from multiple breeds.
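The melody example above lends itself to a concrete sketch of invariance: store the intervals between notes rather than the notes themselves, and the same tune is recognized in any key. This is a toy illustration of the idea, not the cortex's actual encoding; the MIDI pitch numbers are an assumption for the example.

```python
# Sketch of an invariant representation: relative intervals survive
# transposition even though the raw pitches all change.

def intervals(notes):
    # Steps between successive pitches, in semitones.
    return [b - a for a, b in zip(notes, notes[1:])]

happy_birthday_in_c = [60, 60, 62, 60, 65, 64]  # MIDI pitches, key of C
stored = intervals(happy_birthday_in_c)

# The same melody transposed up five semitones (a different key).
happy_birthday_in_f = [n + 5 for n in happy_birthday_in_c]

print(intervals(happy_birthday_in_f) == stored)  # True: the pattern is invariant
```

Every absolute pitch differs between the two renditions, yet the relational form, which is what the cortex is claimed to store, is identical.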

Why Memory Beats Computation

The difference between computers and brains, Hawkins emphasizes, comes down to time. Computers perform billions of operations per second but have to compute answers step by step. Neurons are slow—only about 200 operations per second—but they work in parallel and rely on stored solutions rather than real-time calculation. What takes a computer millions of logical steps, a brain can solve in a hundred neural hops. A baseball player catching a fly ball isn’t calculating trajectories; his brain is recalling a learned sequence of movements and outcomes. In this way, evolution turned memory into computation. The brain is fast precisely because it remembers rather than computes.
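The contrast above can be made concrete with a toy sketch: computing an answer step by step versus recalling a stored solution in a single lookup. The projectile formula and the idea of caching past answers are illustrative assumptions, not the book's model of motor learning.

```python
import math

def compute_landing_point(speed, angle_deg, g=9.81):
    # Step-by-step physics: arithmetic performed anew on every call.
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# "Memory": a solution stored from past experience, recalled in one hop.
remembered = {(30, 45): compute_landing_point(30, 45)}

def recall_landing_point(speed, angle_deg):
    return remembered.get((speed, angle_deg))

# Same answer, but recall replaces calculation.
print(math.isclose(recall_landing_point(30, 45),
                   compute_landing_point(30, 45)))  # True
```

The brain, on this view, is on the `recall` side of the contrast: slow elements, but answers retrieved in a few hops rather than derived in millions of steps.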

By redefining memory as prediction-in-waiting, Hawkins restores it from a passive archive to an active mechanism. Intelligence, creativity, and consciousness all emerge from memory’s capacity to compare expectation with reality. Each time the world surprises us, the brain adjusts its stored sequences—and in that small act of learning, intelligence grows.


Creativity, Consciousness, and the Predictive Mind

Most people think creativity is magical—a spark of inspiration that appears from nowhere. Hawkins disagrees. Creativity, he argues, is prediction by analogy. Your brain continuously compares what’s happening now to patterns from the past, mixing and recombining them into new predictions. Every act of creation—from solving a math problem to composing Shakespearean metaphors—is the cortex doing what it always does: finding analogies between patterns.

Analogy as Everyday Intelligence

You use this kind of creativity constantly. When you enter a restaurant you’ve never visited, you already predict there will be a restroom, where it’s likely located, and what signs to look for—all by analogy to other restaurants. Likewise, when Hawkins first played a vibraphone, he could do so because his brain recognized analogies between its metal bars and piano keys. Creativity is not limited to art or genius—it happens whenever you use stored patterns to navigate new situations. In Hawkins’s words, “To know the world is to see what’s next.”

The Two Sides of Consciousness

Hawkins splits consciousness into two levels. The first is self-awareness: the brain’s continuous monitoring of its own predictions and errors. This everyday consciousness correlates with forming declarative memories—memories you can describe verbally. If you could erase those memories, as in a thought experiment Hawkins proposes, you’d lose the feeling of having been conscious, even though the body still acted intelligently. The second aspect is qualia—the subjective feeling of sensations. Why does red look different from blue, or touch feel different from sound? Hawkins suggests this may come from differences in the structure of sensory inputs or from subcortical wiring outside the neocortex. Whatever its source, consciousness arises from prediction: self-awareness is simply the cortex modeling itself modeling the world.

Mind, Soul, and Imagination

Your “mind,” Hawkins says, is what the brain does. Because the cortex models both the external world and your body but not itself, your thoughts feel separate from your physical being. This gives rise to the familiar sense of having an independent mind or soul. Imagination, in turn, is what happens when predictions loop back as inputs. When you picture a memory or plan a move in chess, your brain is running its predictive model in isolation—seeing and hearing its own expectations. Thinking, imagining, and daydreaming are thus not magic, but the cortex running simulations of the future.

Creativity and consciousness, then, are not extraordinary gifts—they are natural consequences of a brain that endlessly predicts. Intelligence, for Hawkins, is not about having thoughts—it’s about thinking ahead.


Building Intelligent Machines

If the neocortex is an algorithm, Hawkins insists, we can replicate it in silicon. But what he envisions is not the humanoid robot of science fiction—it’s something subtler, more powerful, and less threatening. Intelligent machines of the future will not imitate people; they will learn and predict like brains.

Beyond Robots and Artificial Minds

Movies and novels have conditioned us to imagine intelligent machines as talking androids, from HAL 9000 to C-3PO. Hawkins rejects this image. True intelligence doesn’t require emotions, bodies, or personality—it requires a memory-prediction hierarchy. Intelligent machines will function more like the neocortex than like humans: they will sense, remember, and predict within their domains. A smart car might understand traffic flow; an intelligent microscope might model patterns in cells. Each will develop its own form of understanding, shaped by its sensory world.

Challenges and Possibilities

To build these machines, we’ll need to solve two major technical challenges: capacity and connectivity. The human cortex has roughly 30 trillion synapses; replicating that requires immense memory storage and efficient interconnections. But with modern computing advances, this scale is achievable. The bigger challenge is architectural: designing hierarchies that allow learning and feedback. Hawkins envisions machines that sense patterns, build hierarchical models, and learn through repeated exposure—much like children exploring the world.
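The capacity challenge above admits a quick back-of-envelope calculation. The bytes-per-synapse figure here is an assumption chosen purely for illustration; only the 30-trillion synapse count comes from the text.

```python
# Rough storage estimate for a cortex-scale memory system.

synapses = 30e12          # ~30 trillion cortical synapses (from the text)
bytes_per_synapse = 1     # assumed: one byte of connection strength each

total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e12:.0f} TB")  # ~30 TB of raw storage
```

Tens of terabytes is large but within reach of commodity hardware, which is why Hawkins treats connectivity and hierarchical architecture, rather than raw capacity, as the harder problems.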

He also rejects the fears that intelligent machines will rebel or enslave humanity. Intelligence, he stresses, is separate from emotion. Machines that predict don’t have drives, desires, or anger unless we build those into them. Like the telephone or the transistor, they will be tools—powerful but benign—that expand our ability to understand complexity. The real danger lies in misunderstanding them, not in their rebellion.

A New Technological Epoch

Hawkins foresees a revolution akin to the invention of the computer. Intelligent memory systems could analyze global weather patterns, anticipate energy consumption, monitor disease outbreaks, or explore physics beyond human intuition. Machines could sense through sonar, radar, or even molecular vibrations and learn to “see” worlds invisible to us. With vast capacity and speed, they could model the dynamics of economies, ecosystems, or galaxies—a kind of mechanical imagination. Like the cortex, these machines could store patterns within patterns, learning the deep structure of reality.

The goal, Hawkins concludes, is not to recreate humans but to amplify intelligence itself—to build machines that help humanity learn faster and see further. The future of intelligence, biological or artificial, depends on our understanding of prediction. Once we master the brain’s algorithm, we won’t just build smarter tools—we’ll build new ways of understanding the universe.
