
The Great Mental Models Volume 1

by Shane Parrish, Rhiannon Beaubien

Sharpen your thinking with nine timeless mental models, from “The Map Is Not the Territory” to Hanlon’s Razor. Discover how these general thinking concepts cut through complexity, expose blind spots, and empower your decision-making.

Thinking Better Through Mental Models

How can you make better decisions when life constantly throws you into complexity and uncertainty? In The Great Mental Models: Volume 1, Shane Parrish (creator of Farnam Street) argues that the quality of your life depends on the quality of your thinking—and that the quality of your thinking depends on the mental models you carry in your mind. Parrish contends that understanding how the world works, through timeless models drawn from multiple disciplines, enables you to think more clearly, avoid blind spots, and make wiser choices.

Mental models are simplified representations of how something works. Just as maps help us navigate physical terrain, these mental maps help us navigate reality. But unlike maps, most of our internal models are invisible; we rarely question whether they’re accurate or useful. Parrish’s mission is to help readers recognize the invisible thinking patterns guiding their decisions, refine them, and build a latticework—a system of interconnected models—that reflects reality more accurately.

Why Mental Models Matter

Parrish’s story begins with his own turning point. As a young intelligence officer after 9/11, he found himself rising quickly but feeling utterly unprepared to handle the complex human and strategic decisions before him. His formal education had trained him in computer science—not in judgment, perspective, or wisdom. That disconnect led him to study decision-making, read voraciously, and eventually discover Charlie Munger—the legendary investor who championed the idea of developing a “latticework of mental models.”

Munger’s philosophy shaped the foundation of the book: wisdom doesn’t come from raw intelligence but from combining fundamental ideas across disciplines. If you know only one thing—say, economics or psychology—you’ll end up using that one lens to see every problem. (“To the man with a hammer, everything looks like a nail.”) But when you possess multiple lenses—from physics, biology, mathematics, and psychology—you see the same problem in layers, reducing blind spots and improving decisions.

The Core Argument

Parrish argues that all great thinkers—from Darwin and Feynman to Buffett and Munger—use mental models, consciously or not. These models act as shortcuts toward understanding and help filter noise from signal. However, we can’t rely on any single discipline to navigate a complex world. The book’s central claim is that developing a multidisciplinary mental toolkit—a latticework—allows you to match the right model to the right situation.

The first volume introduces nine foundational models of general thinking. These include: “The Map Is Not the Territory,” “Circle of Competence,” “First Principles Thinking,” “Thought Experiment,” “Second-Order Thinking,” “Probabilistic Thinking,” “Inversion,” “Occam’s Razor,” and “Hanlon’s Razor.” Each model offers a way to pierce through illusion and get closer to reality, whether you’re evaluating a business decision, negotiating with others, or reflecting on personal goals.

Seeing Reality As It Is

At its heart, the book asks you to face an uncomfortable truth: your current worldview is incomplete and often wrong. We all have blind spots—formed by limited perspective, ego, or distance from the consequences of our actions. Parrish uses the myth of Antaeus, the giant who lost his strength when lifted off the ground, to drive home this metaphor. When we lose “contact with reality,” our strength—our judgment—falters. Wisdom demands constant testing of our assumptions against reality, accepting feedback, and updating our views.

From Galileo’s ship thought experiment to Darwin’s insistence on observing what “easily escapes attention,” Parrish emphasizes that reality is best understood through feedback and model-testing. Our ego often blocks this process. We’d rather be right than be accurate; we’d rather defend our identities than refine our models. The disciplined thinker, on the other hand, learns to update beliefs readily—to be more scientist than lawyer.

Building the Latticework

Parrish extends Munger’s idea: about 80 to 90 mental models cover most real-world situations. These models come from physics (gravity, energy conservation), biology (evolution, feedback loops), psychology (incentives, biases), and mathematics (probability, geometry). You don’t need to master the technical details—you need to understand their principles well enough to apply them flexibly.

The more models you have, the more reality you can see. Problems that once seemed ambiguous start to reveal structure. Moreover, the models interconnect—a concept Parrish describes as the “lattice.” It’s this interlinking of ideas that strengthens your mental architecture and makes your thinking resilient. In practice, when facing a decision, you examine it through multiple lenses: probabilities, incentives, systems, and time horizons. The cumulative result is clarity and far better outcomes.

Why It Matters Today

In an age of information overload, Parrish’s argument is deeply relevant. The world rewards those who can synthesize knowledge rather than merely accumulate it. Schools often teach specialization; they don’t teach thinking. But a multidisciplinary mindset reclaims what education should be: preparing you to understand reality, adapt, and make good decisions under uncertainty. Whether you’re managing investments, leading a team, or raising a family, this book’s models illuminate universal principles for navigating complexity.

Ultimately, The Great Mental Models: Volume 1 is not a textbook—it’s a philosophy of lifelong learning. It’s about becoming wiser rather than merely more informed, cultivating curiosity rather than defensiveness, and learning to act based on how the world really is. As Parrish reminds readers through the voices of Feynman and Munger, understanding must lead to adaptation. To think better is to live better.

The chapters that follow explore how to build that understanding—starting with maps, competence, principles, and lenses that turn insight into applied wisdom.


The Map Is Not the Territory

If you’ve ever relied on GPS and still gotten lost, you know the danger of confusing the map with the terrain. Shane Parrish opens his exploration of mental models with one of the most foundational ideas: “The map is not the territory.” Coined by Alfred Korzybski, the founder of general semantics, the phrase reminds us that our representations of reality—whether charts, theories, business plans, or beliefs—are simplifications, not the real thing.

Why Maps Matter—But Mislead

All humans need maps. They reduce overwhelming complexity into manageable chunks so we can act. Financial statements condense thousands of transactions into digestible numbers; policies and procedures turn human behavior into repeatable rules. But every map excludes details. When we forget that exclusion, we risk mistaking our abstraction for reality itself. A business plan, no matter how detailed, can’t describe employees’ emotions, shifting markets, or unforeseen events. When our world changes faster than our maps, we crash into cliffs our GPS failed to show.

Truth Through Feedback

Parrish combines classic examples to show this principle in action. Newtonian physics, once the “perfect” map, was later updated by Einstein’s relativity. Newton wasn’t wrong—his models still explain most of daily physics—but they’re limited to certain contexts. Scientists, unlike most of us, constantly test and rewrite their maps. That humility is what keeps their understanding aligned with the terrain. As statistician George Box famously said, “All models are wrong, but some are useful.”

When Maps Shape Territory

A deeper danger arises when maps don’t just represent the world—they alter it. Urban planner Jane Jacobs documented how 20th-century city designers imposed neat models of “ideal cities” without understanding real human behavior. Reality didn’t fit their blueprints, yet they reshaped neighborhoods to fit their models—destroying organic communities. Similarly, economist Elinor Ostrom warned that simplistic maps of human cooperation (like Garrett Hardin’s “Tragedy of the Commons”) can mislead policymakers. Real communities often design self-regulating systems to manage shared resources, contradicting the grim model. When we apply models as dogma, we harm the very systems we’re trying to manage.

Perspective, Context, and Feedback

Parrish emphasizes three ways to use this model wisely: stay updated, know your cartographer, and seek feedback. Reality is the ultimate test—like explorers refining maps as they travel. We must ask: who drew the map, and what did they omit? Historical borders in the Middle East, for example, often reflect colonial interests more than local geography. Good decision-making means understanding the context behind the abstractions we use.

Ultimately, “The Map Is Not the Territory” trains you to maintain intellectual humility. Use abstractions as guides, not prisons. In a changing world, your greatest advantage is not having the most detailed map but knowing when it’s time to redraw it.


Circle of Competence

What if success depended not on knowing everything, but on knowing exactly what you know—and what you don’t? “Circle of Competence,” a principle championed by Warren Buffett and Charlie Munger, teaches that understanding your boundaries is one of the surest paths to good judgment. Parrish uses vivid stories—from Sherpa Tenzing Norgay’s mastery of Everest to Queen Elizabeth I’s humility in seeking counsel—to show why clarity beats confidence.

Knowing Your Territory

Inside your circle, you have deep fluency built on experience. Outside it, illusions abound. Parrish contrasts two characters: the “Lifer,” who has lived in a small town for decades and understands every interrelation, and the “Stranger,” who visits briefly yet believes he knows enough. The Stranger’s ignorance guarantees mistakes. This parallels the danger of stepping into domains we scarcely understand—making financial bets, leading new industries, or pontificating without data.

Building and Maintaining the Circle

Circles of competence form through long curiosity, feedback, and honest reflection. Parrish stresses the discipline of recording decisions and outcomes—keeping a personal journal to separate skill from luck. Doctors like Atul Gawande use coaching to see blind spots; leaders solicit honest feedback to align self-image with reality. True learning demands confronting evidence without ego. Your circle expands only through humility and iteration.

Operating Outside Your Circle

You can’t know everything. The trick is learning how to operate beyond your expertise wisely. Parrish gives three rules: learn basic principles of a new realm; consult experts, asking not just for answers but for how they think; and rely on fundamental models from multiple disciplines to bridge your ignorance. When Queen Elizabeth I took the throne during England’s turmoil, she openly admitted what she didn’t know and built a diverse council of experts. Her humility and consultation laid the groundwork for stability and growth—the mark of governing within one’s circle.

“Circle of Competence” is a lifelong calibration tool. As Buffett once said, successful investing isn’t about how big your circle is, but how clearly you know its circumference. The same holds for life: clarity of competence is power.


First Principles Thinking

When facing a problem, do you reason from assumptions—or from truth? “First Principles Thinking,” rooted in Aristotle and championed by innovators like Elon Musk, teaches you to break issues down to their fundamental truths and rebuild from the ground up. Parrish presents this model as an antidote to groupthink and lazy convention—helping you strip away assumption in search of reality.

Breaking Things Down

First principles are the bedrock facts that cannot be reduced further—like gravity in physics or incentives in human behavior. Instead of copying, you analyze: what absolutely must be true? This allows innovation. Parrish cites Temple Grandin’s redesign of livestock chutes: by observing that calm animals produce better outcomes, she rethought the system not through tradition but from the principle of animal behavior. Similarly, when scientists disproved the “sterile stomach” assumption by discovering H. pylori bacteria, they not only improved medicine but demonstrated how challenging false assumptions reshapes knowledge.

Tools for First-Principles Reasoning

Parrish highlights two practical methods. The first is Socratic questioning—a disciplined series of “why” inquiries to uncover assumptions and evidence. The second is the Five Whys method, popularized by Toyota’s production system: ask “why” repeatedly, typically five times, until you reach a root cause. Each approach forces you to separate dogma (“because that’s how it’s done”) from fact (“because physics demands it”).
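The Five Whys chain can be sketched as a tiny program. Everything below is hypothetical: the cause chain, the problem statement, and the depth limit are invented for illustration, not taken from the book.

```python
# A toy "why?" chain. Every entry is hypothetical and purely illustrative.
cause_of = {
    "deliveries are late": "orders leave the warehouse late",
    "orders leave the warehouse late": "picking lists are printed late",
    "picking lists are printed late": "the nightly batch job overruns",
    "the nightly batch job overruns": "a database index is missing",
}

def five_whys(problem: str, cause_of: dict, depth: int = 5) -> list:
    """Follow 'why?' links until no deeper cause is recorded
    or the depth limit is reached; return the full chain."""
    chain = [problem]
    for _ in range(depth):
        cause = cause_of.get(chain[-1])
        if cause is None:  # no deeper cause recorded: root reached
            break
        chain.append(cause)
    return chain

five_whys("deliveries are late", cause_of)
# ends at "a database index is missing", the actionable root cause
```

The point of the exercise survives the toy framing: each answer to “why?” becomes the next question, and you stop only when the answer is something you can actually fix.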

Rebuilding from Reality

Reasoning from first principles unlocks creativity because it redefines boundaries. Musk famously asked: batteries are expensive because suppliers say so, but what are the fundamental materials and their costs? That question birthed the electric car revolution. Parrish shares that even lab-grown meat emerged from similar reasoning—defining “meat” not as animal tissue but as a combination of texture, taste, and protein. Once you define the first principles, innovation follows logically.

Whenever you feel stuck, return to first principles. Ask what is true, not what is accepted. As physicist Carl Sagan noted, “Science is a way of thinking,” not just a body of facts. From business to biology, this habit keeps your ideas grounded in reality and open to reinvention.


Second-Order Thinking

Most people stop at first-order thinking: they ask “What happens next?” Second-order thinkers ask something deeper: “And then what?” This model illuminates how short-term solutions often generate long-term problems. Parrish uses examples—from antibiotic use in livestock to Cleopatra’s political strategy—to show the enormous leverage of thinking through consequences.

Seeing Beyond the Obvious

First-order thinking is easy: it’s the immediate cause-and-effect. Second-order thinking requires tracing ripple effects through systems. When British colonizers paid for dead cobras in Delhi, locals bred cobras to earn money—creating more snakes, not fewer. Similarly, mass antibiotic use in animals increased profits temporarily but created antibiotic-resistant bacteria. The law of unintended consequences is simply poor second-order thinking.

Delaying Gratification and Building Trust

Second-order thinking helps prioritize long-term outcomes over immediate comfort. Cleopatra’s alliance with Julius Caesar may have caused short-term turmoil, but it secured Egypt’s future. Likewise, trust in relationships and organizations develops by choosing actions that pay off over time—showing reliability instead of taking quick wins. Parrish reminds us that each action plants seeds for future reactions.

Applying It in Everyday Life

Before acting, ask: what happens next, and next again? What are the second- and third-order effects? This question reveals compounding loops. Garrett Hardin’s reminder, “You can never merely do one thing,” summarizes it best. In decision-making, always map system feedbacks, incentives, and delays.

In a world of complexity, second-order thinking transforms you from a reactive thinker into a strategist. You stop solving symptoms and start designing solutions that endure.


Probabilistic Thinking

Nothing in life is guaranteed. The question is not “Will this happen?” but “How likely is this to happen?” Probabilistic thinking trains you to see uncertainty as quantifiable. Parrish invites readers to think like statisticians, spies, and investors—making decisions based on likelihoods rather than illusions of certainty.

Bayesian Updating

Thomas Bayes’ key insight was simple: when you get new evidence, adjust your previous beliefs accordingly. For example, if “violent crimes double” but started from a tiny base rate, your real risk hardly changes. Bayesian thinking protects against panic by anchoring new data to established priors. You constantly revise your mental map as evidence accumulates—like a pilot recalibrating course during flight.
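Bayesian updating can be made concrete with a few lines of arithmetic. The numbers below are invented to mirror the base-rate point: an alarming headline is weak evidence, so a small prior barely moves.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | E) from Bayes' rule, given P(H), P(E | H), and P(E | not H)."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

# Hypothetical numbers: prior belief that your area is dangerous is 1%.
# Alarming headlines appear fairly often whether or not the danger is
# real (30% vs. 20%), so the evidence is only weakly diagnostic.
posterior = bayes_update(prior=0.01, p_e_given_h=0.30, p_e_given_not_h=0.20)
# posterior is about 0.015: the belief moves from 1% to roughly 1.5%
```

Even evidence that sounds dramatic shifts the estimate only slightly when it is barely more likely under one hypothesis than the other; that is the arithmetic behind anchoring new data to established priors.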

Understanding Extremes

Not all uncertainty is created equal. Some phenomena—like human height—follow predictable “bell curves.” But others—like wealth or disaster—follow “fat-tailed” distributions, where rare events dominate. Nassim Taleb’s The Black Swan shows how underestimating these tails causes crises. Parrish urges readers to prepare, not predict. Resilience—having buffers, redundancies, and “antifragile” strategies—trumps foresight. Insurance companies thrive on this approach: over large enough samples they can price risk accurately, even though any individual event is uncertain.

Action Through Asymmetry

Probabilistic thinking also reveals asymmetries in outcomes—situations where the upside exceeds the downside. Vera Atkins, who recruited WWII spies, constantly assessed probabilities under foggy conditions, balancing potential gain (vital intelligence) against the fatal risks agents faced. Similarly, sound investing looks for asymmetric bets—limited downside, unlimited upside.
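A quick expected-value sketch shows why asymmetric bets matter. The probabilities and payoffs below are invented for illustration; the lesson is in the shape of the comparison, not the specific numbers.

```python
def expected_value(bets):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in bets)

# Hypothetical bets: a symmetric coin flip wins 100 or loses 100.
symmetric = [(0.5, 100), (0.5, -100)]
# An asymmetric bet usually loses a little, but its capped downside
# (-100) is dwarfed by the occasional large upside (+500).
asymmetric = [(0.3, 500), (0.7, -100)]

expected_value(symmetric)   # 0: no edge either way
expected_value(asymmetric)  # about 80: positive despite losing 70% of the time
```

Treating each decision as a weighted bet like this, rather than as a yes/no prediction, is what lets an investor or a spymaster accept frequent small losses in exchange for a positive expectation overall.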

By learning to think in probabilities, you escape black-or-white judgments. You stop seeking certainty and start seeking advantage. When you view decisions as weighted bets, uncertainty becomes not a threat—but an edge.


Inversion

What if solving a problem isn’t about knowing what to do—but knowing what to avoid? “Inversion,” Parrish writes, is one of the most underused yet powerful cognitive tools. Inspired by mathematician Carl Jacobi’s advice to “invert, always invert,” it means flipping a problem on its head to see it more clearly. Instead of asking “How do I succeed?” you ask “What would guarantee failure?” and work backward.

Reverse Engineering Reality

The power of inversion is illustrated through unexpected figures, from Sherlock Holmes to Florence Nightingale. Holmes often deduced truths by assuming his adversaries’ success and reasoning backward to what must be true. Nightingale used data to uncover not how to cure soldiers, but how to prevent their deaths—eliminating poor sanitation. Similarly, investor John Bogle inverted the question of “How to beat the market?” into “How to stop losing to fees and overconfidence?”—and invented the index fund.

Turning Problems Upside Down

Inversion comes in two main forms: (1) prove or disprove assumptions by flipping them (“If this were false, what else would be true?”) and (2) achieve goals by avoiding their ruin (“What behaviors would ensure failure?”). In leadership, inversion works to streamline action. IBM CEO Louis Gerstner, when asked for a revolutionary vision, responded that IBM didn’t need another grand idea—it needed to execute simply and stop doing the wrong things. By eliminating mistakes, he revived the company.

Inverting for Innovation

Marie Van Brittan Brown, a nurse worried about safety, didn’t ask, “How do I feel safe alone?” but inverted it: “What would make me feel unsafe?” Lack of visibility and communication. That led her to invent the modern home security system. Inversion frees creativity by simplifying assumptions and exposing obstacles.

Ultimately, inversion teaches that brilliance often hides in doing less: eliminating error, preventing stupidity, and cutting clutter. As Sun Tzu said centuries ago, “He wins his battles by making no mistakes.” You beat most people not by being smarter—but by avoiding what makes them fail.


Occam’s Razor

When confronted with competing explanations, choose the simplest one that fits the evidence. That’s Occam’s Razor—a model that slices away unnecessary complexity. Shane Parrish shows how this 14th-century principle of logic saves time, avoids confusion, and keeps thinking honest in an age addicted to complication.

The Case for Simplicity

William of Ockham advised against multiplying entities without necessity. The most plausible explanation usually requires the fewest assumptions. Astronomer Vera Rubin, for example, discovered galaxies didn’t spin as expected. Instead of conjuring exotic new forces, she considered the simplest consistent idea—dark matter. Her theory remains the leading explanation precisely because it fits observed data elegantly, though invisible.

Filtering Complexity

Occam’s Razor doesn’t say the world is simple—it says start simple. In medicine, this rule appears as “When you hear hoofbeats, think horses, not zebras.” Doctors check common causes first; patients who leap straight to rare diseases create needless panic. Similarly, leaders like Louis Gerstner rescued IBM not with grand complexity but by focusing on clear principles: serving customers and executing basics well.

Avoiding False Complexity

Parrish warns against mistaking complexity for intelligence. Humans often invent elaborate explanations for ego’s sake. But simplicity is powerful because it’s testable. When sunlight began triggering a carcinogen-producing reaction in its open reservoirs, the Los Angeles Department of Water and Power skipped far costlier cover options and released millions of simple black plastic “shade balls” to block the light—an elegant, low-cost fix. The shortest route between two truths, it turns out, is often a straight line.

Occam’s Razor isn’t minimalism for its own sake. It’s disciplined reasoning. When you strip away assumptions and focus on what’s essential, clarity and accuracy follow. Complexity may impress—but simplicity persuades.


Hanlon’s Razor

Hanlon’s Razor offers a liberating rule for human relations: “Never attribute to malice that which can be adequately explained by stupidity.” In a world quick to demonize, this simple model saves energy, reduces cynicism, and improves collaboration.

Why We Assume the Worst

Our brains overreact to vivid evidence. Psychologists Daniel Kahneman and Amos Tversky demonstrated that even logical people overweight emotional or plausible details—a bias seen in their “Linda problem.” Similarly, when someone cuts you off in traffic, your gut says “They’re targeting me,” not “They didn’t see me.” This instinct for intention fuels paranoia and division.

The Power of Empathetic Logic

Parrish pairs philosophy with history. In 408 AD, Roman Emperor Honorius executed his loyal general Stilicho, wrongly assuming treachery. The empire soon fell. Centuries later, during the Cuban missile crisis, Soviet officer Vasili Arkhipov faced American depth charges. Believing the attack deliberate might have triggered nuclear war. Instead, Arkhipov guessed it was error—and saved humanity. Hanlon’s Razor isn’t just courtesy; it’s civilization.

Applying It in Life

Practicing Hanlon’s Razor means granting others “mental charity.” Before assuming conspiracy, test for incompetence or limited information. At work, it turns blame into problem-solving. In leadership, it encourages trust by default, corrected by evidence. As science-fiction writer Robert Heinlein called it, we must reject the “Devil Theory” of human affairs: most problems stem from flawed systems, not evil people.

Hanlon’s Razor doesn’t deny real malice—it just reminds us it’s rarer than error. Most people aren’t villains... they’re busy, biased, or mistaken. Adopting this view keeps you calm, resilient, and focused on fixing systems rather than fighting shadows.
