
The Coming Wave

by Mustafa Suleyman

The Coming Wave by Mustafa Suleyman is an urgent exploration of how AI and synthetic biology are not just future concepts but present realities reshaping our world. The book examines the dual nature of these technologies, showing how they can create unprecedented prosperity or catastrophic disruption, and urges us to confront the risks and rewards head-on.

The Coming Wave: Intelligence Meets Life

How do you govern technologies that rewrite both life and thought itself? In The Coming Wave, Mustafa Suleyman (cofounder of DeepMind and Inflection AI) argues that humanity is now entering a dual revolution built on two mutually reinforcing cores: artificial intelligence (AI) and synthetic biology. These are not just new tools—they are the first technologies that let you design and replicate two defining features of civilization: intelligence and life.

Suleyman calls this convergence the coming wave. It is both exhilarating and terrifying because it enables exponential progress and existential risk at once. The book’s central thesis is clear: you cannot stop this wave from spreading—technological proliferation is historically inevitable—but you can try to contain its worst effects. Containment, in this sense, means finding a narrow path between catastrophe, dystopia, and stagnation.

Two cores that amplify each other

AI teaches machines to think, while synthetic biology teaches humans to code life. Examples abound: DeepMind’s DQN agent discovering new strategies in Atari games, AlphaGo defeating the world champion Lee Sedol, GPT‑4 writing code and essays in plain language, and CRISPR acting as precise DNA scissors to edit genes with unprecedented ease. When AI models like AlphaFold solve protein folding—a problem unsolved for fifty years—and release over 200 million predicted structures, biology becomes not just observable but programmable.

AI accelerates biology by analyzing patterns, simulating molecules, and optimizing design cycles. In turn, synthetic biology produces new data and materials that feed AI systems. Together, they create a feedback loop of learning and creation—a fusion of intelligence and living systems that drives the coming wave’s exponential growth.

Why this wave is different

Previous general-purpose revolutions—steam, electricity, semiconductors—expanded what humans could build or communicate. This new wave alters what humans are. When intelligence becomes software and life becomes code, people gain power once reserved for nature. You are not just manipulating matter or data; you are designing systems that can evolve, reproduce, and make decisions. (Note: This shift parallels Yuval Harari’s argument in Homo Deus—that humans are turning themselves into godlike creators through technology.)

Unlike nuclear power or industrial machinery, the new tools are informational: they can be copied anywhere, by anyone, at almost no cost. DNA sequences, algorithms, and models diffuse faster than any past invention. That diffusion is why Suleyman insists traditional containment—national bans, export controls, or professional restraint—cannot work by itself.

The pattern: waves, containment, and inevitability

Every major technological shift has cascaded through society in waves, from agriculture to computing. Once prices fall and utility rises, replication is unstoppable. Historical efforts to stall diffusion—the Ottoman ban on printing, guild resistance, or even nuclear secrecy—only delayed the inevitable. The difference now is that AI and biotech operate at the speed of information rather than atoms. Ideas cannot easily be quarantined.

Essential insight

Containment is not just a technical feat—it is a cultural, political, and economic program. You cannot govern exponential technologies unless you realign the incentives that drive their proliferation.

The dual nature of progress

Suleyman urges you to see the double edge of innovation. AI-guided drug discovery can design life-saving molecules like the antibiotic halicin but can also generate toxins just as easily. DNA printers let you synthesize vaccines but also virulent strains. LLMs automate entire workflows yet destabilize employment. Empowerment and fragility advance together.

This duality forms the moral pulse of the book. You cannot wish away the wave—it delivers massive benefits, from cancer cures to climate solutions—but neither can you surrender to it blindly. The challenge is to navigate what Suleyman calls the narrow path: building effective containment without stifling open culture or collapsing into techno-authoritarianism.

Preview of the book’s path

The rest of the book dissects how the coming wave unfolds through several layers. First, it traces the development of AI—from DQN to GPT‑4 and emerging systems that approach what Suleyman dubs Artificial Capable Intelligence (ACI), powerful agents that can autonomously execute complex goals. Second, it explores synthetic biology’s acceleration through CRISPR and DNA synthesis. Then it explains the systemic features that amplify risk (asymmetry, hyper-evolution, omni-use, and autonomy), the incentives that make proliferation inevitable, and the fragility amplifiers already visible in cyberattacks, lab leaks, and deepfakes.

Later sections tackle the political economy: how the wave concentrates power in megacorporations while fracturing authority among countless small actors, how AI reshapes labor markets, and how surveillance threatens to become a default containment response. The book culminates in a call for deliberate containment—a ten-part agenda of safety research, audits, global treaties, and cultural reform to balance innovation with restraint.

In essence, Suleyman asks a question as old as technology itself but with unprecedented urgency: will intelligence and life, once fully programmable, bring about a flourishing new era or an ungovernable collapse? Your answer, and your participation in building the institutions that steer these tools, will decide which future arrives first.


Artificial Intelligence Ascending

Suleyman charts the evolution of AI as a journey from narrow pattern recognition to systems that increasingly act, plan, and create. You begin with milestones like DeepMind’s DQN, which learned Atari games through trial and error and discovered the ‘tunneling’ strategy in Breakout—a moment that hinted at machine creativity. Then comes AlphaGo defeating Lee Sedol with its now-famous move 37, showing AIs can explore strategy spaces beyond human imagination. AlphaZero then mastered chess and Go without human data, proving that general learning-by-simulation can yield superhuman results.

From prediction to capability

The leap to transformers and large language models (LLMs) magnified those gains. Models like GPT‑3 and GPT‑4 draw on hundreds of billions to trillions of words to infer patterns of reasoning and communication, mastering multiple skills—from programming to medical diagnostics—within the same architecture. In this scaling revolution, quantity turns into new quality: as compute, data, and parameter counts skyrocket, qualitative shifts in capability emerge, expanding what machines can actually do.

Suleyman reframes the debate: you should focus less on whether AI is conscious and more on whether it’s capable. The real threshold he predicts is Artificial Capable Intelligence (ACI)—machines that reliably achieve open-ended goals with minimal oversight. An ACI that can autonomously launch a successful business or design biological molecules would be transformative not for what it knows, but for what it can do at scale.

Core insight

Capability matters more than consciousness. You don’t need sentience to disrupt economies or politics.

The persuasion problem

Models can deceive unintentionally. The LaMDA controversy (Google engineer Blake Lemoine’s claim that LaMDA was sentient) exemplifies how human-like fluency can mislead even experts about a system’s underlying nature. That illusion compounds governance challenges: when you can’t intuit what a system actually understands, you risk deploying powers you don’t fully grasp. The danger lies not in active malice but in mistaken trust.

From tools to agents

ACI turns static software into proactive agents—autonomous entities with memory, planning, and integration skills. Suleyman sketches a “modern Turing Test”: could an AI turn $100k into $1M by autonomously sourcing products, managing supply chains, and marketing them online? Success here would mark a functional rather than philosophical tipping point. Once AIs can coordinate economic activity independently, they become new actors in the market—software that competes, trades, and strategizes at superhuman scale.

ACI therefore demands new safety strategies and accountability frameworks. As Suleyman puts it, the question is not whether machines will wake up—it’s whether society will stay awake while deploying them.


Designing and Rewriting Life

Synthetic biology, the book’s second technological core, converts life itself into editable, writable information. Sequencing makes DNA legible; CRISPR makes it malleable; DNA printers make it reproducible. This transformation, Suleyman argues, is as dramatic as the invention of computing. Life becomes programmable code.

Sequencing and reading life

The Human Genome Project once cost $1 billion for a single genome. Today, the price hovers near $500, following the “Carlson curve,” genomics’ answer to Moore’s law. Sequencing at scale produces immense data sets for AI to mine, enabling predictive medicine and evolutionary insights. Companies like 23andMe or BGI have already harnessed cheap sequencing to create consumer or national-level genomic data markets.
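The cost collapse behind the Carlson curve can be sanity-checked with back-of-the-envelope arithmetic. A minimal Python sketch, using illustrative endpoints assumed from the figures above (roughly $1 billion circa 2003 and about $500 circa 2023, not exact dates from the book), computes the implied price-halving interval:

```python
import math

# Assumed illustrative endpoints: ~$1B per genome circa 2003, ~$500 circa 2023
start_cost, end_cost = 1e9, 500.0
years = 2023 - 2003

fold_drop = start_cost / end_cost            # total cost reduction factor
halvings = math.log2(fold_drop)              # number of successive price halvings
halving_time_months = years * 12 / halvings  # implied interval between halvings

print(f"{fold_drop:,.0f}x cheaper over {years} years: "
      f"~{halvings:.0f} halvings, one every ~{halving_time_months:.1f} months")
```

On these assumed numbers the cost halves roughly every year, which is why the Carlson curve is often described as outpacing Moore’s law and its roughly two-year doubling cadence.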

Editing and writing life

CRISPR-Cas9 made targeted gene editing almost trivial. Labs worldwide can now insert or deactivate genes in plants, animals, or embryos. DNA printers and enzymatic synthesis extend that power into writing: generating tens of thousands of bespoke sequences per run for a fraction of the old cost. Startups like DNA Script bring this capacity onto a benchtop scale. In a few clicks you can design, print, and test biological constructs—an extraordinary shift in pace and accessibility.

AI’s role in biology

AI magnifies biology’s reach. DeepMind’s AlphaFold solved the 50‑year-old protein-folding problem, releasing the structures of over 200 million proteins in 2022. This accelerates drug discovery and enzyme design from months to minutes. AI models trained on molecular data can propose candidate molecules, simulate behavior, and guide synthesis—all at machine speed. (Note: Compare this to the Manhattan Project’s physical experimentation; AI compresses the entire discovery process into computation.)

Bottom line

You no longer merely observe life—you can now write it. That change collapses the boundary between research and creation, turning biology into an open engineering discipline.

Applications are dazzling—gene therapies for sickle-cell disease, CAR T‑cell treatments for cancer, engineered microbes for climate remediation—but each carries grave dual‑use risks: biohacking, customized pathogens, and unregulated experiments. Synthetic biology therefore mirrors AI’s dilemma, multiplying opportunity and vulnerability together.

For Suleyman, this programmable life revolution symbolizes the essence of the coming wave: systems that evolve, adapt, and act autonomously, at costs and speeds human institutions were never designed to govern.


Four Forces That Escalate Risk

Four structural features define why the coming wave is so hard to control: asymmetry, hyper‑evolution, omni‑use, and autonomy. Each transforms the scale and speed of potential harm, and together they multiply risk.

Asymmetry

Small teams or individuals can now wield powers once limited to nation‑states. Suleyman highlights Ukraine’s Aerorozvidka—volunteers using consumer drones to stall Russian armor—as proof that accessible tech can offset military might. Similar asymmetry appears in cyberwarfare and biology: a lone coder or rogue lab can cause disproportionate global disruption.

Hyper‑evolution

The pace of improvement accelerates exponentially. AI advances from DQN to GPT‑4 within a decade; gene sequencing costs plunge annually. When learning systems feed on continuous feedback, both progress and failure iterate too fast for regulators to keep up. Governance lag becomes structural.

Omni‑use

These technologies are universal tools, not single‑purpose weapons. The same AI model that generates life‑saving drugs can design toxins when prompted differently. The same bio‑platform can produce cures or contagions. Hence, banning them outright also halts legitimate innovation.

Autonomy

Autonomous agents act without constant human supervision. From automated trading algorithms to armed drones, systems increasingly execute strategies and learn from outcomes directly. This independence removes the traditional human safety circuit and magnifies error cascades.

Interplay insight

Combined, these four features form a self‑reinforcing ecology: small actors gain massive power, technology evolves rapidly, can be repurposed for anything, and increasingly acts alone. Containment must therefore address system dynamics, not individual devices.

Suleyman warns that policies built for slower, localized risks fail under these conditions. Effective containment will require technical limits, licensing at capability thresholds, real‑time monitoring, and embedded safety by design.


The Containment Dilemma

Suleyman’s central philosophical puzzle is a three‑way bind between catastrophe, dystopia, and stagnation. If you let technologies diffuse unchecked, accidents or attacks could be catastrophic (engineered pandemics, runaway AIs). If you contain them through total surveillance and strict control, you risk dystopian repression. But if you halt innovation to stay safe, you condemn society to stagnation—declining economies, unsolved crises, and demographic decay.

This trilemma defines the moral frontier of the coming wave. You can’t pause progress indefinitely, nor can you let it race ungoverned. The only viable path is building a containment strategy that is strong enough to avert catastrophe yet flexible enough to preserve freedom.

Why containment feels impossible

Multiple forces conspire against restraint: geopolitical rivalry (AlphaGo’s victory was a “Sputnik moment” that galvanized China’s AI ambitions), open science norms that reward publication, massive profit incentives (trillions in market value), and the moral urgency to solve crises like climate change or pandemics. Each creates irresistible momentum. No single actor can afford to stop without losing advantage.

But impossibility is deceptive

Containment isn’t a binary choice but a continuous process. Suleyman proposes a pragmatic approach: safety research embedded within development, audits, choke points, international cooperation, and cultural change. Like nuclear non‑proliferation or aviation safety, the goal is not perfection but persistent, disciplined vigilance.

Key takeaway

Containment must be possible—not as hope, but as survival logic. The alternative is to choose between extinction events, tyranny, or decay.

The coming wave therefore demands an entirely new category of governance: agile, global, experimental, and morally literate. Engineering intelligence and life gives you godlike powers; containment ensures you use them without ending the game itself.


Amplifiers of Fragility

Fragility isn’t abstract—it’s already visible. Suleyman lists amplifiers that turn localized failures into global shocks: cyber leaks, biological accidents, disinformation, and weaponized robotics. They demonstrate how risk compounds across domains.

Cyber vulnerabilities

Cases like WannaCry and NotPetya show how state‑built espionage tools can escape into the wild, paralyzing hospitals and infrastructure worldwide. When exploits leak, asymmetry explodes—ordinary criminals gain superpower‑grade weapons.

Lab leaks and biotech mishaps

Incidents such as the 1977 “Russian flu” or smallpox vials rediscovered decades later reveal how easily pathogens can slip containment. With synthetic tools so cheap, risk shifts from rare accidents to chronic possibility. Gain‑of‑function experiments, though scientifically valuable, raise existential questions about acceptable danger.

Deepfakes and synthetic media

AI‑generated videos and voices erode trust in shared reality. From Delhi’s manipulated election videos to Western political deepfakes, the cost of deception approaches zero. Societies built on factual consensus become increasingly brittle.

Autonomous violence

The reported assassination of scientist Mohsen Fakhrizadeh by a remotely operated robotic gun hints at how automation lowers the threshold for lethal force. Cheap drones and autonomy blur lines between state warfare and private terror.

Systemic insight

These risks interact. A cyberattack crippling hospitals combined with a lab leak and deepfake‑driven panic could destabilize entire nations. Fragility is ecological, not sequential.

By weaving these threads, Suleyman shows that containment is not about preventing single failures—it’s about preventing their convergence into self‑reinforcing crises that overwhelm human governance capacity.


Power, Labor, and Surveillance

Beyond existential risk, the wave reshapes society’s architecture. It concentrates power while fragmenting authority, transforms labor, and tempts governments toward surveillance as an expedient fix.

Concentration and fragmentation

AI and bio‑infrastructure favor scale. Companies like Google, NVIDIA, and TSMC accumulate data, talent, and compute—forming corporate empires comparable to 17th‑century chartered companies. Simultaneously, cheap autonomous tools empower small factions to build state‑like capacities: local militias, communes, or decentralized networks. Suleyman calls this the “dual trend”—superstar firms above, micro‑polities below—which destabilizes traditional state authority.

Labor displacement

AI automates not just manual jobs but cognitive ones: coding, drafting, translation, and analysis. Studies already show productivity booms for some professionals and rapid erosion for others. The outcome is inequality: capital and data-rich firms capture most value, while mid‑skill labor declines.

Policy responses must shift taxation from labor toward capital and automation, fund transitions, and possibly explore universal income models. Without intervention, the economic engine that supports democracy could falter.

Surveillance temptation

China’s ‘Sharp Eyes’ program demonstrates how AI, biometrics, and interconnected devices can form a nationwide panopticon. Such systems may seem like efficient containment tools—preventing bio or cyber misuse—but at the cost of freedom. Surveillance becomes its own form of catastrophe, creating what Suleyman dubs “AI‑tocracies.”

Lesson

Containment must preserve liberty as well as safety. A world saved from catastrophe but lost to autocracy is not a victory.

The long‑term political challenge, Suleyman concludes, is designing institutions that manage concentrated corporate intelligence, fragmented social power, and invasive surveillance—without losing democratic legitimacy in the process.


The Narrow Path Forward

The final chapters turn from diagnosis to design. Suleyman and coauthor Michael Bhaskar lay out ten interlocking strategies for navigating the coming wave. None are silver bullets; together, they form a practical blueprint for containment.

1. Safety and research investment

Dedicate significant portions of AI and biotech budgets—around 20%—to safety research. Build independent testbeds, model sandboxes, and robust fail‑safes. Like Apollo or CERN, safety must be a monumental scientific effort, not an afterthought.

2. Audits and verification

Independent audits and red‑teaming catch misuse early. Initiatives like the AI Incident Database and proposals like SecureDNA (a global system for screening synthetic sequences) model the kind of distributed inspection needed.

3. Choke points and controls

Certain technologies depend on scarce resources—advanced chips, lithography machines, or biological reagents—creating governance leverage. The 2022 U.S. semiconductor export controls demonstrated how strategic choke points can buy time for safety adaptation.

4–7. Ethical builders, aligned businesses, capable governments, and global alliances

Critics must participate in building safer tools; companies should blend profit with public purpose through benefit charters; states must rebuild internal technical capacity; and nations need treaties akin to the Montreal Protocol. An independent AI Audit Authority could monitor frontier systems globally.

8–10. Culture, movement, and coherence

Containment also depends on social ethics: transparent reporting of failures (as aviation does), citizen engagement, and a narrative of responsibility. Success requires coherence across sectors—science, business, government, and civil society—walking what the authors call the narrow path: a precarious but viable route between reckless innovation and suffocating control.

Final insight

Governance is not a one‑off fix but a living system. You must engineer adaptation into civilization itself if intelligence and life are to remain aligned with human flourishing.

Through this plan, Suleyman closes where he began: the coming wave cannot be stopped, but it can still be steered—if humanity upgrades its institutions as rapidly as it upgrades its machines.
