
What We Owe the Future

by William MacAskill

William MacAskill’s What We Owe the Future challenges readers to consider the moral implications of their actions today for future generations. By exploring longtermism, the book offers insights into preventing catastrophic outcomes and shaping a prosperous future through thoughtful decision-making in areas like AI safety and biotechnology.

The Moral Weight of the Future

What if you considered not only those alive today but all who might ever live? In What We Owe the Future, philosopher William MacAskill argues that morality doesn’t stop at the edge of the present. He calls this perspective longtermism: the idea that positively influencing the long-term future is a key moral priority of our time. Future people, he reminds you, are as real as you are. They may number in the trillions, and the quality of their lives depends partly on the choices made now.

MacAskill’s case begins with a vivid thought experiment: imagine living the combined lives of all people—past, present, and future. From that vantage point, our moment becomes a brief flicker before an almost infinite expanse of time, and even small interventions today can tilt the fate of entire future civilizations. The asymmetry is striking: it’s easy to destroy future potential but hard to recreate it once lost. Humanity, he says, is a teenager on the cusp of maturity—reckless but full of promise.

Why Future People Matter

MacAskill insists that distance in time, like distance in space, is morally irrelevant. You wouldn’t ignore someone suffering on another continent just because they’re far away; nor should you ignore someone who may live thousands of years from now. Many institutions already act for the long run—museums preserving knowledge, parks protecting nature for coming generations, and the Haudenosaunee (Iroquois) principle of considering the seventh generation. The book asks you to generalise that respect, extending moral concern as far into the future as possible.

How Vast the Future Could Be

Humanity might flourish for millions of years on Earth or spread across the stars for billions. Even under conservative assumptions—a million years of survival at today’s population—future people could outnumber those who have ever lived by a factor of ten thousand. The expected value of such a vast future is enormous. When multiplied by even a small chance that actions today affect that long arc, the moral stakes become staggering.

The Heart of Longtermism

Three ideas drive longtermism: (1) future people matter equally; (2) there could be an astronomically large number of them; and (3) it’s possible to shape their outcomes. Most ethical systems already prize impartiality and scale—longtermism extends these values through time. MacAskill admits that he began as a practical altruist focused on global poverty but came to see that safeguarding and improving the far future might be the most effective form of altruism possible. It’s the moral math of opportunity: even a modest chance to influence trillions of lives outweighs short-term projects by orders of magnitude.

Core Idea

If you accept that future lives count and that the future could be immense, rationality and compassion require treating the long-term consequences of your actions as one of your greatest responsibilities.

Throughout the book, MacAskill explores how to act under such cosmic responsibility. He builds frameworks for evaluating actions (the SPC model), traces historical case studies of moral change (the abolition movement), and assesses crucial technologies and risks (AI, bioweapons, climate, and stagnation). Ultimately, he offers both caution and hope: humanity stands at a hinge moment, able either to extinguish its potential or cultivate a flourishing world that endures for eons.


Reasoning About the Distant Future

MacAskill recognises that acting on such a grand timescale seems daunting. To guide decisions, he introduces the Significance–Persistence–Contingency (SPC) framework—a disciplined way to estimate the long-term value of any action. You assess how important a change is (significance), how long it will last (persistence), and how dependent it is on your intervention rather than inevitability (contingency). Multiplying these gives a rough expected moral payoff.
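The multiplication at the heart of SPC can be made concrete in a few lines. This is an illustrative sketch, not from the book: the function name and all the scores are hypothetical, chosen only to show how the three factors combine.

```python
# Illustrative sketch of the SPC framework.
# All numbers below are hypothetical scores, not figures from the book.
def spc_value(significance, persistence, contingency):
    """Rough expected long-term value of an action:
    value per unit time (significance) x duration (persistence)
    x probability the change depends on your action (contingency)."""
    return significance * persistence * contingency

# A big but fleeting change vs. a modest but enduring one:
short_lived = spc_value(significance=100, persistence=5, contingency=0.5)    # 250.0
long_lived  = spc_value(significance=10, persistence=1000, contingency=0.25) # 2500.0
print(short_lived, long_lived)
```

Even with a fraction of the significance and a lower contingency, the enduring change dominates once persistence enters the product—the pattern behind MacAskill’s comparison of short-term aid with speculative but persistent interventions.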

Using Expected Value

SPC extends the logic of expected value familiar from decision-making and poker. In both cases you weigh stakes and probabilities. A simple 50/50 bet to win £3 or lose £1 has an expected return of +£1—but when applied to centuries or millennia, even a 0.1% chance of preventing extinction could dominate any near-term project. MacAskill borrows this "thinking in bets" mindset from professional players like Liv Boeree to encourage probabilistic moral reasoning rather than moral paralysis.
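The bet from the text, and the longtermist twist on it, reduce to the same expected-value arithmetic. A minimal sketch (the extinction-scenario payoffs are hypothetical stand-ins for “astronomical stakes”):

```python
# Expected value: sum of probability-weighted payoffs.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# The 50/50 bet from the text: win 3 or lose 1.
bet = expected_value([(0.5, 3), (0.5, -1)])
print(bet)  # 1.0

# Hypothetical longtermist comparison: a certain modest good vs.
# a 0.1% chance of averting a loss of vast (stand-in) value.
near_term   = expected_value([(1.0, 1_000)])
speculative = expected_value([(0.001, 10_000_000)])
print(near_term, speculative)  # the tiny-probability bet dominates
```

This is the “thinking in bets” move: the small probability doesn’t make the action negligible, because it multiplies against enormous stakes.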

Each Element Explained

  • Significance: the scale of good or harm per unit of time (e.g., eradicating slavery or developing clean energy).
  • Persistence: the duration of that improvement. Ending slavery persists across centuries, while economic booms may fade fast.
  • Contingency: how pivotal your action is. If the outcome would likely occur without you, contingency is low; if you make the difference, it’s high.

Together these factors let you compare, say, investing in malaria nets today (high significance, low persistence) versus supporting AI safety research (perhaps speculative but possibly extremely persistent and contingent). The latter could have far greater expected moral value due to its massive long-run implications.

Frameworks and Heuristics

Because longtermism deals with deep uncertainty, MacAskill adds three complementary heuristics: take robustly good actions (beneficial under many scenarios), build options (preserve flexibility and avoid lock‑in), and learn more (prioritise research and understanding). Clean-energy innovation fits all three—it curbs warming, leaves accessible fossil fuels in reserve for any post‑collapse recovery, and builds knowledge about sustainable resilience. Such heuristics help you act wisely when full foresight is impossible.

Practical Takeaway

Treat longtermist action as a reasoned bet. Estimate scale, duration, and necessity; focus where magnitude and persistence are huge and where your contribution is truly pivotal.

By grounding moral aspiration in quantitative reasoning, SPC turns vague care for the future into a structured, evidence-based endeavor. It doesn’t remove uncertainty, but it makes you rational about it—helping translate the moral vastness of longtermism into practical priorities.


Shaping the Future: Survival and Trajectory

Once you accept that the future matters, you need to choose where to focus. MacAskill divides the project into two main routes: preserving humanity’s survival and improving its trajectory. The first ensures there is a future; the second ensures that the future is worth having.

Survival: Guarding Against Extinction

Extinction risks include engineered pandemics, nuclear war, AI takeover, or climatic collapse. Each threatens to cut off all future generations. For instance, genome editing now allows a small team to design pathogens deadlier than anything in nature, while nuclear arsenals could trigger global famine through nuclear winter. MacAskill argues that even small extinction probabilities warrant massive investment in safety, because preventing extinction preserves astronomical moral value.

Trajectory: Steering Values and Institutions

Trajectory changes improve how societies evolve. Historical cases like abolition show that values can shift dramatically and endure for centuries. A handful of Quakers, freedmen, and political reformers—Benjamin Lay, Anthony Benezet, Olaudah Equiano, William Wilberforce—ended what seemed an eternal institution. This moral transformation was not inevitable; it was contingent on effort, timing, and conviction. Similar leverage may exist today for movements improving animal welfare or global cooperation around AI governance.

Interdependence of the Two

MacAskill likens humanity to molten glass: you must shape it carefully (trajectory) while ensuring it doesn’t shatter (survival). Moral progress reduces existential danger, while survival gives time for better values to emerge. He urges a balanced portfolio: invest both in safeguarding against catastrophe and in fostering institutions that make the long-term trajectory benevolent.

Strategic Lesson

Progress and preservation are mutually reinforcing. Secure continuity to buy time, nurture moral improvement to ensure that the time is used well.

This dual focus—preventing extinction and improving moral direction—defines longterm strategy. It transforms abstract ethics into a concrete global agenda: reduce catastrophic risks, promote virtuous institutions, and expand the range of futures that can thrive.


The Perils of Technology and Value Lock‑In

Technological power magnifies moral stakes. Among emerging risks, MacAskill singles out artificial intelligence and engineered pathogens as potential hinge points in history. Each could either lift humanity’s potential or permanently destroy—or freeze—the moral character of the world.

Artificial Intelligence and Locked Values

Advanced AI could replicate itself, enforce ideologies, or govern societies indefinitely. That endurance makes it a candidate for value lock‑in: the entrenchment of certain goals or norms for millennia. Historical analogies show how early decisions set cultural orthodoxy—Confucian dominance in China lasting centuries, for instance. With AGI, the lock‑in could be vastly stronger. Whether humans maintain oversight or build systems misaligned with human flourishing will determine what kinds of beings inherit the cosmos.

Because AI could arrive within decades (Ajeya Cotra’s forecast gives a 50% chance by 2050), MacAskill calls for urgent international cooperation on alignment and governance. The goal is not to halt progress but to ensure an open-ended “long reflection,” during which humanity can deliberate which values deserve permanence before technology freezes them by default.

The Biological Revolution

Synthetic biology offers similar double‑edged power. As DNA synthesis becomes cheaper, small labs—or individuals—could build contagions with pandemic potential. Unlike nuclear materials, genetic code is intangible and global. Historical lab leaks (from the UK’s foot‑and‑mouth outbreaks to the Soviet anthrax release at Sverdlovsk) already show how safety margins fail. Forecasting experts estimate roughly a 0.5–1% chance of engineered‑pandemic extinction this century—small, but ethically enormous.

To counter this, MacAskill recommends building robust biosafety, surveillance, and governance systems now. COVID‑19 proved civilisation can barely manage a moderate pandemic; an engineered one could end it. Biosecurity, though technically demanding, is highly tractable—policy coordination and funding can dramatically cut risks.

Common Thread

Both AI and biotechnology exemplify the same principle: power without moral wisdom invites irreversible outcomes. Longtermism urges you to shape that power before it shapes you.

Longterm safety requires foresight equal to innovation. Just as the Han dynasty’s ideological settlement lasted millennia, an AGI‑directed or bio‑scarred civilization could define the next million years. Acting early is the only way to keep humanity’s moral options open.


Collapse, Stagnation, and Recovery

Even if extinction never occurs, civilisation could collapse and fail to recover. MacAskill explores this middle ground between survival and annihilation—a scenario that would still destroy almost all potential value. History shows human resilience (Rome’s fall followed by the Renaissance, Hiroshima’s rebuild), but modern fragility is unprecedented. Nuclear arsenals, climate tipping points, and resource exhaustion could produce a global, unrecoverable breakdown.

Why Recovery Might Now Be Harder

Earlier civilisations rebuilt because low‑hanging fossil fuels and stable climates awaited them. Future societies might not be so lucky. Burn through the accessible coal and oil today, and post‑collapse descendants may never regain industrial capacity; add widespread climate chaos, and the recovery window narrows further. Research commissioned from Lewis Dartnell suggests re‑industrialising without fossil fuels would be extremely difficult. Thus, actions that conserve resources and stabilise climate have longterm insurance value beyond their immediate benefits.

The Hidden Risk of Stagnation

The opposite failure mode is technological stagnation. If growth slows permanently, humanity could remain stuck at a precarious level—too advanced for safety, too primitive for resilience. Declining fertility and diminishing research productivity could trap us for millennia with unresolved existential technologies. This is dangerous because sustained stagnation multiplies the probability that some catastrophe eventually ends civilisation before progress resumes.

Policies promoting inclusive prosperity, scientific innovation, and population stability therefore serve longterm survival. Encouraging safe AGI research, distributed knowledge, and renewable energy all fight stagnation and collapse together. MacAskill’s theme is consistent: conserve options, reduce permanent losses. Avoiding both overreach and decay is the steady path toward a robust, enduring civilisation.

Moral Priority

Decarbonise, cut risks of great‑power war, manage resources for recovery, and sustain curiosity. These efforts buy time—the one currency the future cannot replace.

Preventing extinction is paramount, but preventing unrecoverable regression or stagnation may be nearly as important. A safe and dynamic civilisation keeps alive the possibility of a flourishing future.


Population Ethics and Moral Uncertainty

Longtermism ultimately rests on how you value bringing new lives into existence. Here MacAskill draws on philosopher Derek Parfit. If happy lives have positive moral worth, then preventing extinction sacrifices unimaginable good. Parfit’s famous comparison between wars killing 99% and 100% of people shows that the final 1%—extinction itself—is by far the greater loss, because it ends the entire human story.

The Repugnant Conclusion and its Lessons

In population ethics, three common views—Total, Average, and Neutrality—each lead to paradoxes. The Total View (maximise total wellbeing) implies Parfit’s “Repugnant Conclusion”: a vast population with barely good lives could be better than a smaller, happier one. The Average View avoids this but produces its own absurdities, such as judging a hellish world improved by adding lives that are slightly less miserable. MacAskill guides you through these complications to show why perfect consistency is impossible—and why uncertain compromise is the only rational path.

Acting Under Moral Uncertainty

Because no single theory is fully convincing, MacAskill advises using expected moral value: assign credences to each ethical view and act on their weighted average. If you think the Total View is 50% likely and a Critical‑Level View (value only lives above a threshold) is 50% likely, you act as if there’s a small positive cutoff. This compromise rule avoids paralysis while honoring moral doubt. It mirrors how you treat empirical uncertainty—by averaging across possibilities rather than denying them.
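The weighted-average rule described above is simple arithmetic. A minimal sketch, using the text’s 50/50 split between two theories; the valuations each theory assigns to the action are hypothetical:

```python
# Sketch of "expected moral value" under moral uncertainty.
# Credences come from the text's 50/50 example; the per-theory
# valuations of the action are hypothetical illustrations.
def expected_moral_value(credences, values):
    """credences: {theory: probability you assign it};
    values: {theory: value that theory assigns the action}."""
    return sum(credences[t] * values[t] for t in credences)

credences = {"total_view": 0.5, "critical_level": 0.5}
values    = {"total_view": 10.0, "critical_level": 4.0}
print(expected_moral_value(credences, values))  # 7.0
```

The result sits between the two theories’ verdicts—acting, as the text says, as if there were a small positive cutoff—so neither view is ignored and neither fully dictates the choice.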

Implications for Existence and Quality

Surveys suggest most lives are modestly positive: fewer than 10% of people would prefer never to have existed. But animal suffering complicates this, since factory farms impose misery on tens of billions of beings annually. Using neuron‑count weighting, even small improvements in farmed‑animal welfare could rival human‑centric interventions in scale. Whether life overall is net good or bad affects whether expanding civilisation is a moral victory or a burden—but moral uncertainty again advises caution in both directions.

Bottom Line

You may never resolve population ethics, but practical humility means weighting all plausible views and acting to keep good futures possible while reducing immense suffering.

MacAskill’s message is philosophical but actionable: uncertainty need not paralyse; it can discipline judgment. The goal is neither blind population expansion nor neglect of the unborn, but careful stewardship of the possibility for worthy lives.


Building a Flourishing Long‑Term Society

If civilisation survives, what kind of future should it aim for? MacAskill distinguishes between eutopia—vast flourishing—and anti‑eutopia—vast suffering. The asymmetry between pain and pleasure is real: the worst agony can outweigh the highest bliss, yet humanity’s motivational asymmetry makes hope rational. People purposefully create beauty, knowledge, and care far more often than they create suffering for its own sake. That gives good futures a statistical advantage, even if bad ones would be worse in sheer magnitude.

Reasons for Cautious Optimism

Most historical horrors—slavery, factory farming, genocide—originated not from a love of suffering but from short‑term incentives and moral blindness. These can be corrected over time as empathy expands. Conversely, building a true hellscape would require coordinated, enduring malice, which history suggests is rare. Thus, while the potential downside of the future is enormous, the probability‑weighted expectation still favors positive outcomes—if humanity continues to learn and cooperate.

Practical Paths to Eutopia

  • Nurture moral reflection before technological lock‑in (“the long reflection”).
  • Promote universal moral values—compassion, honesty, fairness—rather than context‑bound rules.
  • Reduce sources of mass suffering such as extreme poverty and animal cruelty.
  • Encourage global cooperation to manage transformative technologies.

MacAskill stresses that optimism is not complacency. Betting on a good future means betting on humanity’s capacity to mature. It demands education, foresight, and systems that favour empathy over domination. Achieving a genuinely good long‑term civilisation could become the greatest moral project in history.

Empowering Individuals and Movements

Individuals can contribute through focused careers, strategic philanthropy, and collective action. Movement‑builders like 80,000 Hours or High Impact Athletes show how targeted communities can multiply their reach. Donations to evidence‑based causes often outweigh personal consumption changes; political engagement, scientific research, or advocacy can steer institutions toward wisdom. As MacAskill concludes, if you inspire others to act for the long term, you create a legacy of compounding good across generations.

Final Reflection

Hope is rational when it’s matched with responsibility. Protect and improve the future—not because success is guaranteed, but because the scale of what’s at stake is beyond measure.

By combining humility, compassion, and strategic reasoning, MacAskill’s vision invites you to treat history’s next million years as your moral horizon—and your life as its beginning chapter.
