Idea 1
The Moral Weight of the Future
What if you considered not only those alive today but all who might ever live? In What We Owe the Future, philosopher William MacAskill argues that morality doesn’t stop at the edge of the present. He calls this perspective longtermism: the idea that positively influencing the long-term future is a key moral priority of our time. Future people, he reminds you, are as real as you are. They may number in the trillions, and the quality of their lives depends partly on the choices made now.
MacAskill’s case begins with a vivid thought experiment: imagine living the combined lives of all people—past, present, and future. Suddenly, our moment becomes a brief flicker before an almost unimaginably vast expanse of time. From this viewpoint, even small interventions today can tilt the fate of entire future civilizations. The asymmetry is striking: it’s easy to destroy future potential but hard to recreate it once lost. Humanity, he says, is a teenager on the cusp of maturity: reckless, but full of immense promise.
Why Future People Matter
MacAskill insists that distance in time, like distance in space, is morally irrelevant. You wouldn’t ignore someone suffering on another continent just because they’re far away; nor should you ignore someone who may live thousands of years from now. Many institutions already act for the long run—museums preserving knowledge, parks protecting nature for coming generations, and the Iroquois principle of considering the seventh generation. The book asks you to generalise that respect, extending moral concern as far into the future as possible.
How Vast the Future Could Be
Humanity might flourish for millions of years on Earth or spread across the stars for billions. Even under conservative assumptions—a million years of survival at today’s population—future people could outnumber everyone who has ever lived by a factor of ten thousand. The value of such a vast future is enormous; weighted by even a small chance that actions today affect that long arc, the expected moral stakes become staggering.
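The scale comparison admits a quick back-of-envelope check. The figures below are illustrative assumptions, not MacAskill's own inputs; his ten-thousand-fold figure rests on the book's specific demographic assumptions, and the exact ratio swings widely with the numbers chosen:

```python
# Back-of-envelope scale of the future, under illustrative assumptions.
POPULATION = 8e9             # assumed steady global population
LIFESPAN_YEARS = 80          # assumed average lifespan
SURVIVAL_YEARS = 1e6         # the "million years" horizon from the text
PEOPLE_EVER_LIVED = 1.2e11   # rough demographic estimate (~120 billion)

births_per_year = POPULATION / LIFESPAN_YEARS     # ~100 million per year
future_people = births_per_year * SURVIVAL_YEARS  # ~1e14, i.e. 100 trillion
ratio = future_people / PEOPLE_EVER_LIVED

print(f"Future people under these assumptions: {future_people:.0e}")
print(f"Multiple of everyone who has ever lived: {ratio:,.0f}x")
```

Even with these deliberately conservative toy inputs, future people dwarf the historical population by orders of magnitude, which is the only feature of the arithmetic the argument needs.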
The Heart of Longtermism
Three ideas drive longtermism: (1) future people matter equally; (2) there could be an astronomically large number of them; and (3) it’s possible to shape their outcomes. Most ethical systems already prize impartiality and scale—longtermism extends these values through time. MacAskill admits that he began as a practical altruist focused on global poverty but came to see that safeguarding and improving the far future might be the most effective form of altruism possible. It’s the moral math of opportunity: even a modest chance to influence trillions of lives outweighs short-term projects by orders of magnitude.
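The "moral math" can be made concrete with a toy expected-value comparison. Every number here is hypothetical, chosen only to show the structure of the reasoning, not to represent any real intervention:

```python
# Toy expected-value comparison: a long-shot long-term intervention
# versus a certain short-term one. All figures are hypothetical.
p_success = 1e-6               # tiny chance the long-term action matters
future_lives_affected = 1e14   # trillions of potential future lives

# Expected lives affected = probability x magnitude
ev_longterm = p_success * future_lives_affected

lives_shortterm = 1e4          # a concrete short-term project, with certainty

print(f"Long-term expected lives affected: {ev_longterm:.0e}")
print(f"Short-term lives affected:         {lives_shortterm:.0e}")
print(f"Ratio: {ev_longterm / lives_shortterm:.0e}x")
```

Under these assumptions the long shot still wins by four orders of magnitude, which is the shape of the argument; whether real probabilities and magnitudes look anything like this is precisely what the book goes on to examine.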
Core Idea
If you accept that future lives count and that the future could be immense, rationality and compassion require treating the long-term consequences of your actions as one of your greatest responsibilities.
Throughout the book, MacAskill explores how to act under such cosmic responsibility. He builds frameworks for evaluating actions (the SPC model: significance, persistence, and contingency), traces historical case studies of moral change (the abolition of slavery), and assesses crucial technologies and risks (AI, bioweapons, climate change, and technological stagnation). Ultimately, he offers both caution and hope: humanity stands at a hinge moment, able either to extinguish its potential or to cultivate a flourishing world that endures for eons.