
Moral Tribes

by Joshua Greene

Moral Tribes by Joshua Greene explores the evolution of moral reasoning and its impact on modern society. Through engaging examples and thought experiments, it provides insights into resolving ethical conflicts and fostering cooperation for the greater good.

The Tragedy of Commonsense Morality

Why do morally decent groups so often clash? In Moral Tribes, Joshua Greene argues that human morality—though an evolutionary triumph—is also a design flaw for modern civilization. Our moral instincts were crafted to keep small tribes cooperative, but those same instincts now pit tribe against tribe, each confident in its own righteousness. This mismatch between evolved moral psychology and the scale of global interdependence is what Greene calls the Tragedy of Commonsense Morality.

The Parable of the New Pastures

Greene begins with a parable: four tribes of herders—the Northerners, Southerners, Easterners, and Westerners—each live by different moral rules about wealth, its distribution, and fairness. When a new pasture opens, their previously stable social orders collide. Cooperation within each tribe remains strong, but between tribes it disintegrates into conflict. Every group insists that its rules are the moral ones, and rational argument only deepens division. Greene’s point: modern disputes over taxation, welfare, healthcare, or climate change follow this same logic. They’re not mere fights over facts or greed; they’re fights between moral visions.

Two Layers of the Commons Problem

Garrett Hardin’s original “Tragedy of the Commons” showed how individually rational actions (overgrazing by each herder) create collective ruin. Greene’s tragedy operates one level up: tribes now clash because their moral heuristics themselves are incompatible. What solves Me-vs-Us problems within a tribe becomes poison in the wider Us-vs-Them world. You can’t fix that by preaching virtue or enforcing one tribe’s morality; you need a higher-level principle—a metamorality—to referee between moral tribes.

Moral Machinery for Cooperation

To understand the tragedy, Greene explains how morality evolved as a suite of cognitive programs—what he calls your moral machinery. Empathy, guilt, anger, gratitude, and shame are internal mechanisms that encourage cooperation. Biologists like Robert Trivers, Ernst Fehr, and Joseph Henrich show how these instincts build social life through kin selection, reciprocity, gossip, and punishment. Such mechanisms work brilliantly inside small, face-to-face groups, enabling trust among otherwise selfish individuals. But they are domain-specific—they evolved for life in the village, not on today’s global pasture.

From Small Groups to Planetary Tribes

When moralized cooperation scales up, it collides with human tribalism. We instinctively divide the world into “Us” and “Them,” as infant studies by Kiley Hamlin and implicit-association research show. Culture further shapes these instincts. Some societies, like Indonesia’s Lamalera whale hunters, evolved cooperative generosity; others, like the herding cultures of the American South, evolved codes of honor and retaliation (Cohen & Nisbett). These local adaptations become the moral foundations of entire tribes, each convinced of its own fairness.

Why Reason Alone Isn’t Enough

Even when groups reason, they do so through biased notions of fairness. Experiments by Linda Babcock and George Loewenstein show that negotiators unconsciously distort fairness to favor their own side. Dan Kahan’s work on climate opinions demonstrates “identity-protective cognition”: the smarter you are, the better you rationalize your tribe’s beliefs. So moral disagreement is rarely corrected by better information; it’s rooted in psychology.

The Need for a Metamorality

Greene’s solution is not moral relativism or endless cultural empathy; it’s metamorality—a charter for inter-tribal cooperation. We already rely on impartial reasoning in science and economics; Greene wants the same logic applied to ethics. If tribal intuitions fail at the global scale, then rational, evidence-based moral reasoning must take over. The rest of Moral Tribes unpacks how your mind toggles between emotional autopilot and reflective manual mode, why certain moral reactions feel sacred, and why a utilitarian “common currency” may be the only hope for peaceful coexistence.

Core takeaway

The Tragedy of Commonsense Morality reveals the central tension of modern ethics: instincts that once secured small-scale cooperation now fuel large-scale discord. The way forward, Greene argues, is to rise above tribal intuition and design a rational moral system fit for a connected world.


The Dual-Process Moral Mind

Greene’s central scientific claim is that morality operates through a dual‑process system: a fast emotional autopilot and a slower rational manual mode. These correspond to two neural networks—ventromedial and dorsolateral prefrontal cortex—that sometimes cooperate and sometimes clash.

Autopilot vs Manual Mode

Automatic processes generate the moral "feel" of right and wrong. They are ancient, emotionally charged, and quick. Manual processes involve conscious reasoning, weighing costs and benefits, and applying abstract principles. Greene’s camera metaphor captures this trade-off: let the automatic setting handle normal snapshots, but switch to manual when the scene changes—when moral options collide across tribes.

The Trolley Problem

Greene uses Philippa Foot’s and Judith Thomson’s trolley dilemmas to expose the conflict. You’ll likely flip a switch to divert a runaway trolley and save five lives—but refuse to push a man from a footbridge to stop it. The consequences are identical; the intuitive reactions differ. fMRI results show stronger emotional activation (VMPFC and amygdala) in the pushing scenario, while cognitive-control regions (DLPFC) engage for the switch. Patients with VMPFC damage (the region famously destroyed in Phineas Gage’s accident) often give utilitarian answers without emotional qualms. That implies some moral intuitions stem from emotional architecture, not reason.

When Each System Works Best

Emotions evolved as efficient heuristics: disgust protects against disease, guilt maintains cooperation, anger enforces fairness. They’re excellent for local Me‑vs‑Us problems—don’t steal, don’t betray. But for large-scale Us‑vs‑Them problems (climate change, global justice) autopilot falters because such scenarios are evolutionarily novel. Manual deliberation—slow, statistical, reflective—handles those conditions better.

Cognitive Conflicts and Control

Neurologically, the anterior cingulate cortex acts as a conflict monitor, signaling when intuition and reasoning collide. You literally "feel" moral discomfort when your automatic judgment meets counterevidence. Under cognitive load or time pressure, you rely more on emotion; relaxation and deliberation favor utilitarian results. Even drugs that modulate serotonin change your judgments—raising harm aversion or promoting calm calculation (Crockett et al.).

Pragmatic Implications

Greene’s practical advice: trust intuition for everyday decency but switch to manual when moral intuitions diverge across tribes. Doctors treating individual patients may rely on empathy; public-health professionals, who juggle populations, require utilitarian reasoning. Both are moral, but on different levels. The trick is learning when to stop and think, using reasoning not to suppress empathy but to generalize it.


The Moral Gizmo

What triggers your moral alarm? Greene calls it the antiviolence gizmo: an evolved module that screams when an action feels personally violent or intentional. It’s adaptive in everyday life but deeply biased. Understanding its structure explains why some harms outrage you while others pass unnoticed.

Personal Force and Myopic Modules

Your gizmo reacts most intensely to direct, deliberate physical harm. Thus you recoil at pushing someone even to save more lives. The module is myopic: it inspects the direct causal chain of your plan—what you intentionally do—and misses side effects several steps away. Greene’s “modular myopia hypothesis” integrates neuroscience with action planning theories (Marc Hauser, John Mikhail). You judge personal acts of force (pushing, striking) as morally different from remote, interface-based acts (flipping a switch) even if consequences match.

Experimental Evidence

Multiple variants of the trolley problem confirm this asymmetry. When harm is remote—triggered by a switch that releases a man onto the track—utilitarian approval rises from 31% to nearly 80%. Intermediate cases (switch on the footbridge or using a pole) fall in between. Physiological studies (Cushman & Mendes) show people performing simulated violent actions exhibit stress reactions; observers do not. Infants already discriminate between action and omission, suggesting domain‑specific mechanisms built early in development.

Policy Implications

Your alarm is both useful and dangerous. It resists atrocities like murder but can veto rational reforms. Greene cites physician‑assisted suicide debates: medical associations reject active euthanasia because it “feels wrong,” even when the intention is mercy. Yet the same alarm underreacts to climate change and mass suffering—a blind spot for statistical, distant harms. This mismatch between moral salience and real impact distorts policy priorities.

Treat the gizmo’s alarms as moral data, not verdicts. Respect them when they guard against interpersonal cruelty but override them when the stakes involve vast, impersonal consequences that your intuition can’t feel.


Toward a Common Moral Currency

Having diagnosed moral tribalism and cognitive bias, Greene proposes a metamorality: utilitarianism, or as he reframes it, deep pragmatism. This is the idea that moral disagreements can be adjudicated by appealing to one shared metric—the experience of well-being and suffering—counting everyone equally.

Why We Need a Common Currency

When tribes with conflicting values argue, they often use incompatible currencies—one says liberty, another equality, another purity. Negotiation is impossible unless claims can be translated into comparable units. Greene proposes happiness (understood broadly as quality of experience) as that unit. This echoes Bentham and Mill but stripped of 19th‑century metaphysics: the only ultimate bad is suffering, the only intrinsic good is flourishing experience.
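The “common currency” idea reduces to simple arithmetic: translate each tribe’s claim into per-person changes in experienced well-being, then compare totals with everyone weighted equally. A minimal sketch in Python, with hypothetical names and numbers that are not from the book:

```python
# Illustrative sketch, not from the book: the "common currency" treated as
# simple arithmetic over well-being. All names and numbers are hypothetical.

def aggregate_wellbeing(changes):
    """Sum per-person changes in well-being, counting everyone equally."""
    return sum(changes.values())

# Hypothetical well-being deltas under two competing policies.
policy_a = {"northerner": +3, "southerner": -1, "easterner": 0}
policy_b = {"northerner": +1, "southerner": +1, "easterner": +1}

# The policy that makes people better off in aggregate wins,
# regardless of which tribe it flatters.
best = max((policy_a, policy_b), key=aggregate_wellbeing)
```

The hard part, of course, is the translation step (measuring experience), not the addition.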

Pragmatic Utilitarianism

Greene calls his version “deep pragmatism” to distance it from caricatures of rigid calculus. Real humans must balance impartiality with personal commitments. You don’t need to be a saint or “happiness pump”; you just need to weigh aggregate outcomes honestly. When faced with moral conflict, ask: which policy really makes more people better off? Not which tribe it flatters.

Empirical and Psychological Support

Experiments show that people’s intuitions align roughly with utilitarian fairness when stripped of bias. Greene’s “happiness buttons” thought experiment—choosing whose pain to prevent—reveals that nearly everyone, across cultures, values the greater good. Yet proximity and identifiability biases distort those impulses. Peter Singer’s drowning-child example and Greene and Musen’s replications show that physical distance sharply weakens moral motivation. Similarly, Paul Slovic’s “identifiable victim” studies show that empathy fades rapidly as numbers grow—a cognitive flaw utilitarian reasoning can correct.

Real‑World Applications

Practical utilitarianism means using data to save real lives. Greene endorses evidence-based altruism—organizations like GiveWell and the Against Malaria Foundation that deliver the most well-being per dollar. He also rejects the “wealthitarian” fallacy: confusing wealth or rights rhetoric with human experience. In the real world, oppression rarely increases overall happiness, because it inflicts suffering on conscious beings who matter equally.

Metamorality doesn’t require unanimity—it requires translation. When you evaluate claims using a common experiential currency, cross‑tribal cooperation becomes possible on shared rational ground.


Justice and Reform

Nowhere is the tension between moral intuition and utilitarian reasoning clearer than in criminal justice. Should punishment exist to satisfy moral outrage or to promote social good? Greene weighs retributivism against consequentialism and urges pragmatic reform.

Retributive Taste vs Practical Outcomes

People naturally believe offenders “deserve” pain—a moral taste shaped by evolutionary deterrence. Yet retributive instincts can sustain harmful institutions like mass incarceration. Utilitarianism reframes punishment’s purpose: deterrence, rehabilitation, and incapacitation. When harsh conditions produce more crime, they fail morally and pragmatically. Greene emphasizes that real utilitarianism doesn’t justify punishing innocents—because in practice, distrust and systemic abuse would create more suffering, not less.

The Psychology of Revenge

Studies by Small and Loewenstein show that identifiable wrongdoers evoke stronger punishment impulses. The “Magistrate and the Mob” thought experiment reveals moral tension: you refuse to punish an innocent even when utilitarian arithmetic suggests saving the town from riot. Greene interprets this as another case of evolved alarm—morally useful locally, unreliable globally.

Reform and Policy

A metamorality of outcomes demands data-driven policy: test whether incarceration deters, whether restorative programs reduce recidivism, and whether punishment inflicts collateral harm. Greene insists the debate stay anchored in facts. Humane reforms are not mere idealism—they are scientifically grounded moral progress. Reducing prison rape, offering rehabilitation, and designing fairer deterrence systems express empathy aligned with reason.

Justice should feel moral and be moral. Satisfying anger isn’t enough; social systems must actually reduce suffering and increase safety.


Deep Pragmatism and Modern Living

Greene’s closing chapters summarize how to live wisely amid moral tribes. His philosophy of deep pragmatism fuses psychological humility with utilitarian reasoning. It offers personal and political guidance for moral life on the “new pastures.”

Rights, Tribes, and Persuasion

Moral tribes anchor their values in sacred “rights.” Greene urges you to treat rights as pragmatic tools: use them to defend hard-won progress (abolition, civil rights), not as opening chess moves in debate. Invoking rights too early halts dialogue; invoking them strategically protects victories already secured.

Six Rules for a Global Tribe

Greene distills his vision into six practical rules:

  • Consult but don’t trust your instincts—shift to manual mode when moral conflict appears.
  • Use rights to end, not start, arguments.
  • Anchor debate in facts and evidence.
  • Recognize biased fairness and correct it.
  • Use a common currency of well‑being to negotiate across tribes.
  • Give—close the gap between local empathy and global responsibility through effective altruism.

What Deep Pragmatism Asks of You

Greene’s moral modernity doesn’t demand perfection. It asks you to notice when your gut is tribal, to engage manual mode when scale or diversity outpaces intuition, and to care impartially when possible. This blend of empathy and evidence creates moral progress without utopianism. If the tragedy of commonsense morality began when tribes met, its resolution may come when tribes learn to reason together.

Deep pragmatism teaches you to use your humanity twice: feel compassion instinctively, then reason deliberately about where it will do the most good.
