Idea 1
Algorithms to Live By
How can computer science help you make better everyday decisions? In Algorithms to Live By, Brian Christian and Tom Griffiths argue that the tools of computer science are not only for machines—they’re for minds. By translating problems of love, work, and life into algorithmic form, they reveal how to think more clearly and act more effectively when facing uncertainty, complexity, or limited time.
The book’s central insight is that life itself poses the same kinds of computational dilemmas that computers face: when to stop searching, how to explore versus exploit, how to prioritize, and how to store and retrieve memories efficiently. The authors show that algorithms—once stripped of their technical clothing—become surprisingly humane. They turn abstract logic into guidance for dating, hiring, organizing, learning, and even forgiving others.
Everyday Life as Computation
Christian and Griffiths treat the ordinary—choosing apartments, planning your schedule, or managing your inbox—as instances of well-studied computational problems. Life continually forces you into trade-offs among speed, accuracy, memory, and exploration. The book’s opening principle, the 37% Rule (from optimal stopping theory), captures this perfectly: when facing a sequence of options you must accept or reject one at a time, look at the first 37% without committing, then choose the next option that surpasses the best you’ve seen. The same reasoning governs decisions about love, hiring, or when to stop looking for parking—each a variation on the secretary problem.
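The 37% Rule is easy to simulate. A minimal sketch, not from the book, assuming candidates arrive in random order with comparable scores; the 37% cutoff is the one that maximizes the chance of landing the single best option:

```python
import random

def optimal_stopping(candidates, look_fraction=0.37):
    """Look at the first 37% without committing, then take the
    first candidate better than everything seen so far."""
    n = len(candidates)
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score
    return candidates[-1]  # ran out of options: forced to take the last one

random.seed(1)
trials = 10_000
wins = 0
for _ in range(trials):
    pool = [random.random() for _ in range(100)]
    if optimal_stopping(pool) == max(pool):
        wins += 1
# Theory predicts the single best candidate is chosen ~37% of the time.
print(f"best candidate chosen in {wins / trials:.0%} of trials")
```

Run many times, the rule lands the very best option in roughly 37% of trials, which is the best any stopping strategy can do under these assumptions.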
Beyond dating or job searches, life’s dynamic environments demand continual learning. The explore/exploit dilemma (from the multi-armed bandit problem) models when to try new options versus stick with the best-known. Early in life—like a system with a long horizon—you should explore widely; later, when time is short, you exploit your accumulated knowledge. Algorithms like the Gittins index and the Upper Confidence Bound (UCB) formalize this wisdom, recommending optimism under uncertainty: assume each new option could be the best until evidence proves otherwise.
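A bare-bones sketch of UCB1, the standard Upper Confidence Bound variant (the payout rates and horizon here are invented for illustration): each arm is scored by its average reward plus an optimism bonus that shrinks the more that arm is tried.

```python
import math
import random

def ucb1(pulls, rewards, t):
    """Upper Confidence Bound: pick the arm with the best optimistic
    estimate -- mean reward plus an exploration bonus."""
    for arm in range(len(pulls)):
        if pulls[arm] == 0:          # try every arm once first
            return arm
    return max(range(len(pulls)),
               key=lambda a: rewards[a] / pulls[a]
                             + math.sqrt(2 * math.log(t) / pulls[a]))

random.seed(0)
true_rates = [0.2, 0.5, 0.8]        # hidden payout rates of three "slot machines"
pulls = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for t in range(1, 2001):
    arm = ucb1(pulls, rewards, t)
    pulls[arm] += 1
    rewards[arm] += 1 if random.random() < true_rates[arm] else 0
# Optimism under uncertainty: the best arm ends up pulled the most.
print("pulls per arm:", pulls)
```

The bonus term embodies "assume each option could be the best until evidence proves otherwise": rarely tried arms get large bonuses and keep being sampled until the evidence against them accumulates.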
Design, Scale, and Organization
Sorting, caching, and scheduling might seem dull realms of computer architecture, but they underlie how you organize information and decide what to do next. Sorting teaches you to ask how much order is truly worth: frequent searches justify the up-front cost of a fully sorted system, while rare ones make it wasteful. Caching and the memory hierarchy reveal why you should keep frequently used items nearby—your desk as the LRU (Least Recently Used) cache of your working life. Scheduling theory reframes time management: should you minimize maximum lateness (Earliest Due Date) or get the most tasks done soonest (Shortest Processing Time)? Each algorithm captures a different philosophy of productivity, and their trade-offs mirror your own tensions between urgency, importance, and flow.
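The desk-as-cache metaphor maps directly onto an LRU eviction policy. A small sketch using Python's `OrderedDict` (the document names are invented):

```python
from collections import OrderedDict

class LRUCache:
    """A desk with room for `capacity` items: when something new needs
    the space, the least recently used item is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def use(self, key):
        if key in self.items:
            self.items.move_to_end(key)          # touched -> most recent
        else:
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)   # evict least recently used
            self.items[key] = True

desk = LRUCache(3)
for doc in ["taxes", "novel", "recipes", "taxes", "mail"]:
    desk.use(doc)
# "novel" was least recently used when "mail" arrived, so it was evicted.
print(list(desk.items))  # → ['recipes', 'taxes', 'mail']
```

Recency turns out to be a remarkably good predictor of future use, which is why the same policy works for CPU caches and cluttered desks alike.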
Uncertainty, Noise, and Good Enough Decisions
The book then tackles the statistical heart of good judgment: Bayesian reasoning. Combining prior beliefs with new evidence lets you predict intelligently even from tiny samples, as Laplace, Bayes, and Gott showed. The idea of priors runs throughout life: your expectations about durations, outcomes, or probabilities shape how you learn and act. From predicting a project’s duration to assessing risk, Bayesian reasoning teaches humility and correction—you learn not from certainty, but from incremental updates.
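Laplace's Rule of Succession, the book's gateway to Bayesian estimation, is one line of arithmetic: after s successes in n trials, estimate the probability of another success as (s + 1) / (n + 2), which builds in a mild prior and keeps tiny samples from producing overconfident extremes. A minimal sketch (the project example is my own):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: after s successes in n trials, estimate the
    probability of the next success as (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# A project finished on time in 3 of 4 past attempts:
print(rule_of_succession(3, 4))   # → 2/3, gentler than the naive 3/4
# With zero data, the rule sensibly says 50/50 rather than refusing to answer:
print(rule_of_succession(0, 0))   # → 1/2
```

Notice how the estimate never hits 0 or 1 from finite evidence: the prior always leaves room for the next observation to surprise you.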
But perfect rationality is illusory. Overfitting—when your model describes noise instead of truth—plagues people as much as machines. The cure is regularization: valuing simplicity and penalizing complexity. Early stopping, cross-validation, and intuitive heuristics preserve robustness. Here simplicity becomes virtue: as Harry Markowitz’s own 50/50 retirement portfolio shows, it’s often better to choose a clean, transparent rule than to chase fragile, over-optimized plans. Gigerenzer’s “fast and frugal” heuristics echo this—simple solutions often outperform elaborate ones in uncertain worlds.
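The "penalize complexity" idea can be sketched as a toy model-selection rule (entirely illustrative; the candidate models, their error numbers, and the penalty weight `lam` are all made up): score each model by its training error plus a complexity charge, and the noise-chasing model stops looking attractive.

```python
def regularized_choice(models, lam=0.5):
    """Pick the model minimizing training error + lam * complexity.
    The penalty makes wiggly, noise-fitting models pay for their wiggles."""
    return min(models, key=lambda m: m["error"] + lam * m["complexity"])

# Hypothetical candidates: the degree-9 fit matches the data almost
# perfectly -- a classic symptom of fitting the noise.
models = [
    {"name": "line",  "complexity": 1, "error": 0.40},
    {"name": "cubic", "complexity": 3, "error": 0.30},
    {"name": "nonic", "complexity": 9, "error": 0.01},
]
print(regularized_choice(models)["name"])  # → line
```

Judged on raw error alone the nonic wins; once complexity costs something, the straight line does, which is the whole argument for regularization in one comparison.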
Relaxation, Randomness, and Creativity
When exact optimization is impossible—when problems explode combinatorially—you relax constraints. The authors introduce relaxation methods: loosening impossible rules, solving easier versions, then rounding or adjusting. This practical humility, used in the traveling-salesman problem or sports scheduling, teaches you to seek “good enough” solutions when perfection costs too much. Messy real-world problems demand relaxation just as computationally hard ones do.
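Relax-then-round can be illustrated on a toy packing problem (my own example, not the book's): drop the all-or-nothing constraint, solve the easy fractional version greedily by value density, then round the fractional part away. The answer is not guaranteed optimal, only good enough, which is the point.

```python
def relaxed_knapsack(items, capacity):
    """Relax the all-or-nothing constraint: rank items by value density
    as the fractional (easy) problem does, then take only whole items."""
    ranked = sorted(items, key=lambda it: it["value"] / it["weight"],
                    reverse=True)
    chosen, remaining = [], capacity
    for it in ranked:
        if it["weight"] <= remaining:   # fits whole: take it
            chosen.append(it["name"])
            remaining -= it["weight"]
        # rounding step: skip anything that would only fit partially
    return chosen

gear = [
    {"name": "tent",  "weight": 5, "value": 10},
    {"name": "stove", "weight": 3, "value": 9},
    {"name": "books", "weight": 4, "value": 4},
]
print(relaxed_knapsack(gear, 8))  # → ['stove', 'tent']
```

The exact 0/1 knapsack problem is NP-hard; the relaxed version is solvable in a single greedy pass, and rounding its answer usually lands close to the true optimum.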
Randomness, far from being the enemy of reason, becomes its ally. Monte Carlo simulations approximate the uncomputable through sampling; the Miller–Rabin primality test shows that randomized algorithms can be dramatically faster than deterministic ones while driving the probability of error as low as you like. Randomness also helps humans escape stagnation. In simulated annealing, accepting bad moves early allows you to avoid local optima—just as allowing mistakes or diversions fosters creativity. Artists like Brian Eno formalized this insight with Oblique Strategies cards, using randomness to provoke new patterns of thought.
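A bare-bones simulated-annealing sketch (the landscape function and all parameters are invented for illustration): always accept downhill moves, accept uphill moves with probability e^(−Δ/T), and cool the temperature T over time so early wandering gives way to settling.

```python
import math
import random

def annealed_minimum(f, start, temp=10.0, cooling=0.99, steps=3000):
    """Simulated annealing: early on (high temperature), sometimes accept
    WORSE moves to escape local optima; cool down to settle on a good one."""
    random.seed(7)                    # fixed seed for a repeatable run
    x, best = start, start
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate             # accept the move (even if worse)
            if f(x) < f(best):
                best = x
        temp *= cooling               # gradually become more conservative
    return best

def bumpy(x):
    # A rippled landscape: local dips everywhere, global minimum near x ≈ -0.3.
    return x * x + 3 * math.sin(5 * x)

result = annealed_minimum(bumpy, start=8.0)
print(round(result, 2))
```

A pure hill-climber started at x = 8 would fall into the nearest ripple and stay there; the early tolerance for bad moves lets the search cross ridges toward far better valleys.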
Networks, Systems, and Society
In its later sections, the book zooms out to systems design: packet switching, exponential backoff, and AIMD keep the Internet stable and resilient. These algorithms embody social metaphors—graceful retreat after collision, modest growth after scarcity, fairness through randomization. Jim Gettys’s discovery of bufferbloat shows that optimizing one metric (throughput) while neglecting another (latency) breaks human experience. The fix isn’t more resources but smarter balance—just as in life, more bandwidth rarely fixes bad timing or misaligned priorities.
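AIMD itself fits in a few lines (a toy event trace, not real TCP): add one to the sending window on each acknowledged success, halve it on each congestion signal.

```python
def aimd(events, start=1.0):
    """Additive Increase, Multiplicative Decrease: grow the sending
    window by 1 on success ('.'), halve it on congestion ('x')."""
    window, history = start, []
    for ev in events:
        window = window / 2 if ev == "x" else window + 1
        history.append(window)
    return history

# Steady growth, one congestion signal, then cautious regrowth:
print(aimd(".....x..."))  # → [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0]
```

The asymmetry is the social metaphor the authors draw out: modest, linear ambition on success, but a sharp, humble retreat at the first sign the shared resource is saturated.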
Algorithmic perspectives also reshape cooperation. Game theory originally modeled prediction—what rational agents will do—but becomes more powerful as mechanism design: changing the rules so that honesty or cooperation is the stable choice. From the Vickrey auction to workplace incentives, the authors argue that we should spend less effort anticipating every move and more designing institutions where good behavior is easy.
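The Vickrey mechanism can be sketched directly (bidders and amounts hypothetical): the highest bidder wins but pays the runner-up's bid, which makes truthful bidding the dominant strategy.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins but pays
    the SECOND-highest bid, so bidding your true value is optimal."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]              # runner-up's bid sets the price
    return winner, price

# Shading your bid below your true value can't lower the price you pay;
# it can only cost you a win you would have been happy with.
print(vickrey_auction({"alice": 120, "bob": 100, "carol": 90}))  # → ('alice', 100)
```

This is mechanism design in miniature: instead of every bidder strategizing about everyone else, the rules are arranged so that honesty is simply the best policy.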
The Human Algorithm
The closing idea, computational kindness, brings the abstract full circle. Every time you propose a plan or send an email, you’re imposing a small computation on others—requiring them to search, evaluate, or optimize. Designing interactions so they minimize others’ mental work is an act of kindness. Offering two specific meeting times is algorithmically superior to saying “whenever works.” The lesson is that living algorithmically isn’t cold rationalism—it’s empathy powered by clarity. By applying these computational metaphors thoughtfully, you learn when to search, when to stop, when to simplify, and when to explore. The algorithms we write for machines, the authors insist, can also teach us how to be better humans.