
The Coming Wave

by Mustafa Suleyman with Michael Bhaskar

The Coming Wave and Containment

How can you harness breathtaking technological progress without inviting catastrophe or authoritarian control? In The Coming Wave, Mustafa Suleyman argues that two intertwined general-purpose technologies—artificial intelligence and synthetic biology—are catalyzing a supercluster of change that will remake economies, geopolitics, and daily life. His core claim is stark: proliferation is the default, containment is the exception, and unless you build a layered, society-wide containment system from the start, small groups will wield outsized power while states struggle to cope.

Suleyman frames history as a succession of technological waves—agriculture, writing, steam, electricity, computing—and says a new, more consequential wave is cresting. But invention is not the point; diffusion is. Once a technology becomes useful and cheaper, it spreads, recombines, and surprises. That’s why he asks you to focus on containment in the broadest sense—technical guardrails and lab practices, yes, but also corporate norms, regulation, international treaties, and civic culture that can throttle or shut down dangerous uses in time.

A supercluster: intelligence meets life

The book’s center of gravity is the fusion of AI and synthetic biology. If the past century moved from atoms to bits, now we move from bits to genes. AI makes life designable and faster (AlphaFold’s protein predictions, GPT-class tools that draft protocols), while synthetic biology gives AI physical leverage (CRISPR edits, DNA printers, lab automation). This combination is multiplicative, not additive: capabilities compound across disciplines and spill over into robotics, quantum modeling, and energy systems.

Four features that break past playbooks

Suleyman distills the containment challenge into four defining features—asymmetry, hyper-evolution, omni-use, and autonomy. Asymmetry lowers the resource bar for strategic effect (hobbyist drones in Ukraine, benchtop DNA printers). Hyper-evolution accelerates improvement so quickly governance lags by years. Omni-use means the same model or kit serves hundreds of benign and malign ends. Autonomy removes the human “brake,” letting systems act end-to-end. You can’t counter this with one rule; you need tailored, layered countermeasures.

Scaling AI and the rise of ACI

Modern AI exhibits a consistent pattern: scale up data, parameters, and compute, and qualitatively new abilities emerge. From DQN’s Atari tunneling trick to AlphaGo’s superhuman Go strategies and transformers powering GPT-3/4, Suleyman treats scaling as a working hypothesis. The frontier aim shifts from artificial general intelligence to ACI—artificial capable intelligence: systems that plan and execute complex, real-world goals (“Go make $1M on Amazon starting with $100k”) with minimal oversight. This reframes risk from far-off sci‑fi to near-term capability integration across the economy.

Incentives, state fragility, and surveillance

Why won’t restraint stick? Four forces—geopolitics, open science norms, profit, and ego—pull you toward relentless development. At the same time, the nation-state’s grand bargain (monopoly on force for public order) frays under cyber shocks (WannaCry, NotPetya), synthetic media, and automation strains. Surveillance plus AI, showcased by China’s Sharp Eyes, SenseTime, and Xinjiang’s data fusion, illustrates a seductive but dangerous “solution”: a panopticon that stifles freedom while promising security.

The trilemma you can’t dodge

Suleyman’s political diagnosis is a trilemma: avoid catastrophe (engineered pandemics, autonomous swarms, runaway automation) without sliding into dystopian surveillance or arresting progress into stagnation. There is no perfect answer—only a narrow path that tilts probabilities away from worst outcomes. That path, he argues, is a multi-decade containment program spanning labs, companies, states, and treaties, underwritten by a civic culture that values safety, transparency, and learning from failure.

Key Idea

Proliferation is inevitable; catastrophe is not. Your job is to embed containment into the very architecture of innovation—technical guardrails, institutional incentives, and international norms—before the wave crests.

In this guide, you’ll see how waves proliferate by default; why AI and programmable life change the substrate of invention; how asymmetry, hyper-evolution, omni-use, and autonomy magnify risk; how scaling drives ACI; why concentration and fragmentation of power happen together; how incentives and state fragility shape outcomes; and finally, what a ten-step containment playbook and technical safety essentials look like in practice. (Note: This approach echoes Carlota Perez’s techno-economic paradigm shifts and Asilomar’s biosafety ethos but centers on realpolitik incentives and engineering detail.)


Waves, Diffusion, Containment

Suleyman asks you to rethink “innovation” as waves—clusters of tools anchored by a general-purpose technology (GPT) that reshapes society. The internal combustion engine didn’t just make cars; it reorganized cities, logistics, and lifestyles. Likewise, computing didn’t only automate math; it rewired communication, finance, and culture. What defines a wave is how it proliferates—gets cheaper, more usable, and more widely recombined until its second- and third-order effects dominate the first.

Why proliferation is the default

Three forces make diffusion hard to stop. First, costs fall with scale and learning (think Ford’s Model T or solar panels). Second, demand expands as conveniences spawn new use cases—more users beget more features. Third, copying and competition accelerate spread; if an incumbent resists, a rival or open-source project fills the gap. Historical resistance—the Luddites, Japan’s sakoku isolation, Ottoman press bans—may delay a wave but rarely derails it. Even speculative booms that blow up (the 1840s railway bubble) leave durable infrastructure that reshapes the economy.

Containment as a stack, not a switch

Containment is not one law or a red button. It’s a “full-spectrum” stack of technical, organizational, legal, and geopolitical layers designed to detect, limit, or stop harmful propagation. That stack includes safer lab practices and air gaps, corporate norms and audits, national regulation and licensing, and international treaties with credible verification. The nuclear regime—export controls, IAEA inspections, norms—is the partial exception that proves the rule: only because nukes are expensive, obvious, and terrifying did the world build robust nonproliferation; even then, near-misses and leaks abounded.

Predictable unpredictability

Waves are creative recombiners: their parts ripple outward, collide with others, and spawn surprise effects. The printing press empowered the Reformation and science but also propaganda and censorship. Electricity enabled factory dynamism and urbanization but also new hazards. The coming wave will be more entangled and faster-moving, fusing software, biology, and energy. You should expect surprises and build buffers—audits, kill switches, and international hotlines—before surprises land.

Practical posture: assume spread, design brakes

For builders and regulators, the lesson is pragmatic: assume your invention, once useful, will get cheaper, leak, and propagate into adversarial settings. Design with that in mind. That means open publication may require redaction or delayed release; models may ship with usage constraints and cryptographic locks; labs may require dual-control for risky equipment. Treat containment not as anti-innovation but as infrastructure that keeps the wave net-positive.

Hard Lesson

History suggests containment usually fails unless it’s multilayered, globally coordinated, and tied to incentives. Plan for diffusion—and engineer your systems so diffusion doesn’t equal disaster.

(Note: This systems framing complements Joseph Henrich on cumulative culture and Carlota Perez on techno-economic “surges,” but Suleyman sharpens it into a governance to-do list: build the layers now, or the wave will build itself without you.)


Intelligence Meets Life

Suleyman’s thesis crystallizes around a supercluster: artificial intelligence converging with synthetic biology. If the 20th century was atoms-to-bits, the 21st adds genes. Information doesn’t just describe reality; it becomes the substrate of life itself. AI reads, predicts, and designs biological structures; synthetic biology writes, edits, and manufactures them. The loop closes: intelligence programs life; life amplifies intelligence’s reach into the physical world.

From atoms to bits to genes

Computing abstracted atoms into bits you could store, transmit, and compute. Now tools are abstracting genes with similar flexibility. Sequencing costs plummeted along the Carlson curve; CRISPR-Cas9 turned editing into a routine technique; benchtop DNA printers bring synthesis closer to your lab bench. AI systems like DeepMind’s AlphaFold moved protein folding from years of lab work to near-instant predictions—AlphaFold’s release of ~200 million structures transformed discovery workflows. The result: design-build-test cycles for biology measured in days, not years.

Multiplicative effects across domains

When two general-purpose technologies meet, you don’t add—you multiply. AI accelerates biological R&D (protein design, pathway optimization, lab robotics), while synthetic biology yields new materials, medicines, and organisms optimized by AI. Examples pile up: AI-assisted drug discovery (the book notes Exscientia), precision-edited crops, and enzyme-driven green manufacturing. On the flip side, the same tools can design toxins or tweak pathogens. Omni-use collides with dual-use, making governance fundamentally harder than with single-purpose machines (like the combustion engine).

From research to manufacturing

Biology shifts from discovery to design and then to manufacturing. DNA foundries can print millions of sequences in parallel; cloud labs and standardized protocols shrink iteration loops. Think of it as “full-stack biology”: design in silico, print on demand, test with automation, scale in bioreactors. AI is the planner and optimizer; synthetic biology is the factory. That factory can do medicine, materials, food, and carbon capture—massive upsides if governed well.

Containment implications

Because AI and synthetic biology are both general-purpose, you face a governance paradox: you want maximal availability for good applications but maximal restriction for dangerous ones. The answer lies in chokepoints (screen DNA synthesis orders, license high-risk tools), technical controls (access logs, cryptographic keys for model weights, sandboxed execution), and norms (responsible publication, red-teaming protocols). Global coordination is essential; biology and models cross borders via data and molecules alike.

Key Idea

AI doesn’t just help you think; it helps you make. Synthetic biology doesn’t just make; it lets intelligence design the living world. That loop is the engine—and the risk—of the coming wave.

(Parenthetical note: This pairing echoes Stewart Brand’s “we are as gods” warning about biotech and Nick Bostrom’s dual-use concerns, but Suleyman grounds it in today’s concrete tools—CRISPR kits, DNA Script printers, AlphaFold APIs—now moving from labs into mainstream industry.)


The Four Features

Suleyman compresses the wave’s destabilizing essence into four features: asymmetry, hyper-evolution, omni-use, and autonomy. Use this as a diagnostic checklist: when a new tool scores high on several, you should expect rapid diffusion, hard-to-predict effects, and a tougher containment job.

Asymmetry: outsized power for small actors

Asymmetry flips traditional threat models. You no longer need a state arsenal to change strategic outcomes. Ukraine’s Aerorozvidka showed how small teams with hobbyist drones, open software, and Starlink bandwidth can blunt conventional power. In biology, benchtop synthesizers and online protocols let tiny labs attempt feats once reserved for national institutes. Containment response: raise barriers on critical steps (e.g., synthesis screening), mandate licensing, and invest in rapid defense tools that scale down as far as offense does.

Hyper-evolution: speed compresses governance

Progress compounds. Language models leapt from preprints to ChatGPT adoption in weeks; sequencing costs plunged orders of magnitude; compute budgets for training runs exploded. Regulatory calendars—committee hearings, impact assessments—can’t match this cadence. Containment response: preemptive standards, model release gates with audits, and dynamic licensing that updates with capability metrics rather than fixed categories.

Omni-use: generality blurs boundaries

A single LLM drafts marketing copy, writes code, composes legal notes, and suggests lab steps. A synthetic biology platform grows meat or designs toxins. This generality defeats one-off, use-case-specific rules. Containment response: regulate by capability and context (e.g., chemistry-design features, lab integration), pair open access with restricted plugins, and require provenance/watermarking for high-risk outputs to manage downstream harms like disinformation.

Autonomy: humans leave the loop

Autonomous systems act, iterate, and scale at machine speed, erasing human deliberation time. From drone swarms and algorithmic trading to end-to-end ACI agents, autonomy amplifies both error and malice. Containment response: enforce human-in-the-loop for high-stakes actions, implement robust off switches, and require simulators/red teams to probe failure modes before live deployment.

Core Insight

Each feature is hard; together they are multiplicative. Small actors gain leverage; the leverage improves rapidly; the same tools serve everywhere; and the tools act on their own. That is the containment problem.

(Note: Think of this as a practical lens akin to Bruce Schneier’s “attackers have the advantage,” but broadened to socio-technical dynamics. It guides what countermeasures to prioritize and where to invest scarce governance attention.)


Scaling To ACI

Modern AI’s arc reveals a simple, unsettling rule: scale delivers capabilities that look like qualitative leaps. Suleyman walks through milestones to show the pattern: DQN learned Atari and discovered strategies (Breakout tunneling) no one hand-coded; AlphaGo’s victories over Lee Sedol and Ke Jie rewrote intuitions about search, learning, and creativity; AlexNet’s 2012 ImageNet win ignited deep learning; transformers (2017) made large language models possible, yielding GPT-3’s fluent generation and GPT-4’s multimodal reasoning that millions use via ChatGPT.

The scaling hypothesis in practice

As parameters, data, and compute rise, emergent abilities appear—coding assistance (GitHub Copilot), chain-of-thought reasoning, non-trivial math, and tool use via APIs. Open-source diffusion (Stable Diffusion, LLaMA leaks) spreads frontier ideas widely, while products embed models everywhere—search, office suites, customer support. Suleyman’s view isn’t metaphysical; it’s empirical: bigger, better-trained models do more, sooner than expected.

ACI: artificial capable intelligence

Suleyman reframes the target from AGI abstractions to ACI—agents that can plan and execute multi-step, real-world tasks end-to-end. His “Modern Turing Test” example is concrete: tell an ACI to turn $100k into $1M on Amazon, and within a few years it may orchestrate market research, supplier outreach, design, manufacturing, advertising, and logistics via existing APIs and marketplaces. This is autonomy plus omni-use in the wild.

Today’s signals and failure modes

You already see pieces of ACI: DeepMind’s data-center cooling optimization, WaveNet’s voice synthesis, Copilot’s coding productivity, and agentic frameworks chaining tools. But you also see fragility: hallucinations, bias, prompt injection, jailbreaks, and opaque reasoning. Suleyman treats these as solvable engineering problems—through uncertainty calibration, better supervision, and evaluation—while stressing the larger governance stakes once capable agents become cheap and ubiquitous.

Containment for scaling eras

Rapid scaling calls for capability-threshold governance: when models cross specific safety or power levels, new obligations kick in—independent red-teaming, incident disclosure, restricted plugins, and cryptographic controls on model weights. Hardware chokepoints (advanced chips, lithography) can buy time. And cultural defaults—like cautious agents (Inflection’s Pi), watermarking, and conservative action policies—should anchor deployments in high-stakes domains.
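The idea of capability-threshold governance can be made concrete with a toy sketch. Everything here is an illustrative assumption, not a scheme from the book: the metric names, trigger levels, and obligation labels are invented to show the shape of “cross a threshold, gain an obligation” rules.

```python
# Hypothetical capability thresholds mapped to governance obligations.
# Metric names, trigger values, and obligation labels are illustrative only.
THRESHOLDS = [
    # (capability metric, trigger level, obligations that apply at or above it)
    ("training_flops", 1e25, ["independent_red_team", "incident_disclosure"]),
    ("bio_design_score", 0.8, ["restricted_plugins", "weight_encryption"]),
    ("autonomy_level", 3, ["human_in_the_loop", "kill_switch_audit"]),
]

def obligations_for(model_metrics: dict) -> set:
    """Return the set of obligations a model's measured capabilities trigger."""
    required = set()
    for metric, trigger, obligations in THRESHOLDS:
        if model_metrics.get(metric, 0) >= trigger:
            required.update(obligations)
    return required

# A large training run triggers audit and disclosure duties; low autonomy does not.
print(sorted(obligations_for({"training_flops": 3e25, "autonomy_level": 1})))
# → ['incident_disclosure', 'independent_red_team']
```

The design point is that obligations attach to measured capability rather than to fixed product categories, so the regime updates automatically as models scale.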

Key Idea

The near-term risk is not godlike AGI but very capable AI doing very real things at scale. ACI changes the economy’s muscle memory—so your containment must evolve from research lab safeguards to infrastructure-grade oversight.

(Parenthetical note: This shifts emphasis from long-horizon AI doom scenarios toward the messy, immediate challenges of integrating agentic systems into logistics, finance, and critical infrastructure.)


Programmable Life

Biology is becoming a design discipline. Reading (sequencing), writing (synthesis), and editing (CRISPR) have moved from rarefied labs into standard toolkits. AI accelerates each step. Suleyman shows how this shift turns life into a platform—programmable, composable, and manufacturable. The upside: new cures, carbon-negative materials, resilient food systems. The downside: dual-use risk where the same tooling can make harmful agents or amplify accidents.

CRISPR and democratized editing

CRISPR-Cas9, pioneered by Doudna and Charpentier, turned gene editing into a precise, affordable method. It already yields edited crops and therapies for diseases like sickle-cell. But Lulu and Nana—the CRISPR-edited babies in China—exposed governance gaps: rogue actors can leap ahead of consensus, and oversight lags tooling. The lesson is not to halt CRISPR but to bind it into licensing, ethics review, and auditable lab protocols.

Synthesis, printers, and foundries

DNA printers and foundries make writing genomes easier and faster. Enzymatic synthesis, benchtop devices (e.g., DNA Script), and cloud-scale printing push iteration cycles from months to days. Combined with standardized lab automation, this is “DevOps for biology.” It empowers startups and universities—great for medicine and materials, but powerful enough to be misused without safeguards.

AlphaFold and computational design

AlphaFold’s ~200 million protein structure predictions shifted protein science from painstaking empiricism to computation at scale. This makes de novo protein design plausible for small teams and aligns with AI-driven molecule generators. As in AI, open access accelerates both good work and risk. Curation and phased releases—capability paired with safeguards—become part of responsible science.

Accidents, GOF, and surveillance

The record isn’t spotless: Russian flu (1977), Pirbright foot-and-mouth, and multiple SARS lab leaks illustrate how containment fails even in advanced settings. Gain-of-function research, while often well-meaning, raises catastrophic tails. Suleyman advocates global pathogen surveillance, higher biosafety baselines, and synthesis screening (e.g., SecureDNA) to catch dangerous orders before molecules exist.
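A minimal sketch shows the core mechanic of synthesis screening. Real systems such as SecureDNA use curated hazard databases, fuzzy matching, and privacy-preserving protocols; the fragments and window size below are invented for illustration.

```python
# Toy synthesis-screening check: compare each fixed-length window of an
# order against a hazard list. Fragments here are made up, not real sequences.
HAZARD_FRAGMENTS = {
    "ATGCGTACCTGA",   # hypothetical fragment of a controlled sequence
    "GGCCTTAAGCGT",
}

def screen_order(sequence: str, window: int = 12) -> bool:
    """Return True if the order looks clean, False if any window matches a hazard."""
    sequence = sequence.upper()
    for i in range(len(sequence) - window + 1):
        if sequence[i:i + window] in HAZARD_FRAGMENTS:
            return False  # flag the order for human biosecurity review
    return True
```

The point of the chokepoint is timing: the check runs before any molecule exists, so a flagged order costs a review rather than a cleanup.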

A safety-first bio infrastructure

Containment in bio means end-to-end trust: vetted users, logged equipment access, screened designs, layered physical containment (BSL-3/4 where appropriate), and international alert networks. Publication norms may require redacted methods; funding may hinge on safety plans. These controls aren’t anti-science; they are what make continued progress politically and morally sustainable.

Key Idea

As biology becomes software-like, your safeguards must become software-like too—continuous monitoring, identity and access management, versioning, and rollback—plus the lab’s physical redundancy and discipline.

(Note: This echoes Asilomar’s recombinant DNA compact, updated for cloud labs and AI co-pilots, where every improvement in capability should pair with a commensurate upgrade in containment.)


Convergence And Power

The coming wave radiates into robotics, quantum computing, and energy—each amplifying AI and synthetic biology. At the same time, power both concentrates in megacorporations and fragments to micro-actors, creating a paradoxical, unstable landscape. You must see both dynamics to design realistic containment.

Robotics: AI gets a body

Robots move from single-purpose arms to generalist helpers. John Deere’s autonomous tractors, Amazon’s Proteus and Sparrow systems, and dexterous manipulation research show automation leaving factories for farms, warehouses, and homes. Swarming and coordination unlock new capabilities—and risks. The Dallas police’s 2016 use of a Remotec Andros robot to deliver explosives foreshadows how policing and warfare norms shift when robots act with lethal force.

Quantum and energy: accelerants

Quantum claims (e.g., Google’s supremacy experiment) portend breakthroughs in optimization, cryptanalysis (Q-Day risk), and molecular simulation that would supercharge drug and material design. Meanwhile, plunging solar costs and fusion milestones (NIF’s net gain; a crop of private fusion startups) make abundant clean energy plausible. Cheap power feeds compute, bioreactors, and robotics—accelerating the wave across domains.

Concentration: mega-firms as proto-states

Apple, Google, and Samsung accumulate chips, cloud, data, and talent—compounding returns that widen an “intelligence gap.” Platforms create feedback loops: better models attract more users, yielding more data, which improves the models. With user bases rivaling nations and infrastructure spanning the globe, these firms can set de facto standards and policies. (Parenthetical: The comparison to the East India Company underlines how corporate power can become geopolitical.)

Fragmentation: Hezbollahization and open tools

The same wave lowers barriers for micro-actors. Hezbollah exemplifies a hybrid actor—services, military power, governance—inside a state. In the future, communes, cartels, or platforms could offer schooling, health, security, and currency. Open-source models (Hugging Face ecosystems, Stability AI) and leaked weights (LLaMA) democratize capability. “Neo-medieval” is Suleyman’s term for a world where techno-feudal giants coexist with hundreds of empowered mini-polities.

Policy for a two-front problem

You can’t fix this with antitrust alone, nor with blanket bans. You must deter harmful diffusion (licensing, audits, chokepoints) while checking monopoly power (interoperability, data portability, compute access rules). Large firms should shoulder higher safety obligations; open ecosystems need guardrails that prevent weaponization. Both centripetal and centrifugal forces need attention—simultaneously.

Key Idea

The wave centralizes the means of production and decentralizes the means of disruption. Containment must constrain both monopolies and malign micro-actors—two fronts, one strategy.

(Note: This complements Shoshana Zuboff’s surveillance capitalism critique by adding a second axis: the open-source and small-actor empowerment that undermines one-size-fits-all regulatory fixes.)


Races, Fragility, Trilemma

Even if you accept the need for guardrails, you collide with incentives. Suleyman maps four drivers that make restraint rare: geopolitics, open science, profit, and ego. These collide with fragile states strained by cyber shocks, synthetic media, and social upheaval. The political endgame is a trilemma—catastrophe, dystopia, or stagnation—unless you build a layered containment regime that earns public trust and international backing.

Unstoppable incentives

Geopolitically, AlphaGo’s triumph was China’s AI Sputnik moment—fueling a 2030 AI plan, quantum satellites like Micius, and massive genomics efforts (BGI). The U.S., EU, India and others race, too. Open science norms (arXiv, GitHub, bioRxiv) push rapid diffusion; the LLaMA weight leak showed how swiftly frontier capability can spread. Profits in the trillions drive VC and corporate R&D; founders and scientists chase legacy and discovery. Together these create a nested collective-action problem: unilateral restraint looks irrational.

State fragility and surveillance temptation

WannaCry and NotPetya revealed how elite cyberweapons (EternalBlue) can leak and cause global damage—crippling hospitals (NHS), banks, and logistics. Synthetic media corrodes trust: a deepfake of Indian politician Manoj Tiwari and manipulated Nancy Pelosi footage illustrate how LLM-era misinformation scales. Economic automation strains labor markets and social contracts. In response, surveillance plus AI—China’s Sharp Eyes, SenseTime, Megvii, Xinjiang’s biometric databases—promises order. The same tech exports globally and seeps into Western contexts via corporate telemetry and CCTV ubiquity.

The great trilemma

• Catastrophe: engineered pathogens, autonomous drone swarms, or algorithmic infrastructure failures. (Aum Shinrikyo’s near-miss portends how small groups might do far worse now.)
• Dystopia: an AI-tocracy of constant data fusion, predictive policing, and preemptive suppression—security traded for dignity.
• Stagnation: freezing tech to avoid risks and losing the tools needed to solve climate, health, and demographic crises.

A layered containment playbook

Suleyman’s prescription spans ten interlocking steps: (1) an Apollo-scale safety push for AI and biosafety; (2) independent audits and red teams, plus public incident databases; (3) chokepoints like semiconductor export controls to buy time; (4) makers who embed safety-by-design; (5) responsible business models and governance structures; (6) competent, technically staffed governments with licensing regimes; (7) alliances and treaties akin to nonproliferation and the Montreal Protocol; (8) a culture that admits and learns from failure (aviation’s model); (9) public movements that demand accountability; and (10) constant coherence—ensuring all layers reinforce each other.

Technical safety essentials

Under the hood, containment starts with engineering: air-gapping and boxing powerful systems; robust off switches and cryptographic controls on model weights; scalable supervision and formal constraints; independent red teaming before deployment; calibrated uncertainty and traceable citations; and mandatory human-in-the-loop for high-risk actions. In bio, SecureDNA-style synthesis screening, identity and access controls, and standardized logging form the equivalent backbone.
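The human-in-the-loop requirement can be sketched as a simple gate in front of an agent’s actions. The risk tiers, action names, and approval callback below are assumptions for illustration, not an interface from the book.

```python
# Toy human-in-the-loop gate: low-risk actions run directly; high-risk
# actions block on a human decision and leave an auditable trace.
AUDIT_LOG = []
HIGH_RISK = {"send_funds", "order_reagents", "deploy_model"}  # illustrative tiers

def execute(action: str, approve) -> str:
    """Run an agent action, routing high-risk ones through a human approver."""
    if action in HIGH_RISK:
        decision = approve(action)          # blocks until a human decides
        AUDIT_LOG.append((action, decision))  # every high-stakes call is logged
        if not decision:
            return "blocked"
    return f"executed:{action}"

# Usage: an auto-deny policy keeps the agent conservative by default.
print(execute("order_reagents", approve=lambda a: False))   # high-risk → blocked
print(execute("summarize_report", approve=lambda a: False)) # low-risk → runs
```

A deny-by-default approver is one way to encode the “conservative action policies” the chapter describes: the agent stays useful for routine work while irreversible steps wait for a person.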

Key Idea

You can’t legislate your way out of a physics-and-incentives problem. Only a synchronized mesh—technical guardrails, business incentives, competent states, and credible treaties—can steer between catastrophe, dystopia, and stagnation.

(Note: Think Montreal Protocol meets aviation safety culture meets semiconductor chokepoints—stitched together for AI and synthetic biology’s speed and scope.)
