
The Big Nine

by Amy Webb

The Big Nine unveils the intricate world of AI, revealing the powerful forces shaping its future. Amy Webb explores the dual‑edged potential of AI advancements, highlighting the urgent need for strategic interventions to prevent catastrophic outcomes and ensure a future where AI serves humanity's best interests.

AI’s Trajectory and the Two Power Tracks

How can you shape the future of artificial intelligence when it’s already shaping you? In The Big Nine, Amy Webb argues that AI’s evolution is neither accidental nor universal. It is being directed by two distinct geopolitical and cultural power tracks: the U.S.‑centric, market‑driven G‑MAFIA (Google, Microsoft, Amazon, Facebook, IBM, Apple) and the Chinese, state‑aligned BAT (Baidu, Alibaba, Tencent). Each bloc operates under different incentive systems—one guided by shareholder returns, the other by centralized industrial policy—and those differences are rewriting economic, social, and ethical norms around the world.

Two models, two futures

In the United States, AI is treated as a commercial product. The G‑MAFIA are rewarded for fast iteration, not careful governance. Webb calls this a nowist mindset: short‑term thinking that favors investor confidence and user growth at the cost of long‑term civic health. You live this every time your feed updates, your ad preferences are optimized, or privacy concessions are quietly folded into new features. In contrast, China’s BAT operate within a long‑range, state‑coordinated vision. The government’s 2030 AI Development Plan fuses national security, economic planning, and social management. Citizens’ data—gleaned through platforms like WeChat and ET City Brain—fuel integrated surveillance and governance systems.

These tracks are not just national differences; they are competing philosophies of what it means to be human in a data‑driven world. The G‑MAFIA turn behavior into commercial prediction. BAT systems convert behavior into compliance metrics. Both demand your data, but for opposite reasons: profit versus control.

The stakes for citizens and governance

This bifurcation matters because it shapes the rules you live under whether or not you notice. If you live within U.S. systems, algorithms quietly nudge you for engagement—likes, clicks, purchases. Under China’s systems, AI optimizes for social harmony and policy compliance. Yet in the global economy you already straddle both worlds: your phone, cloud storage, supply chains, and travel data touch both ecosystems. These systems affect how loans are approved, who is hired, which stories trend, and how health decisions get automated.

Why Webb calls for foresight

Webb warns that short‑term market logic and authoritarian optimization each lead to dangerous lock‑ins. The West risks monopolistic complacency; China risks digital authoritarianism. Without intervention, both paths converge toward diminished human agency—one through addiction to convenience, the other through dependence on state infrastructure. To bridge this, Webb proposes global coordination, calling on governments and companies to treat AI as a public good rather than just a product or weapon.

Preview of what follows

The chapters that follow expose why AI systems inherit the blind spots of their creators, why hardware and infrastructure matter as much as algorithms, and how fragile systems can generate cascading harms. Webb moves from history and culture to possible futures—showing that between utopia and catastrophe lies a range of plausible, human‑guided outcomes. The responsibility, she insists, lies with you and your institutions: to question whose values are encoded, demand transparency, and change the incentives behind the code.

Core claim

AI is not an autonomous runaway technology; it is the aggregate expression of human priorities and politics. Until we design governance and educational systems that reflect diverse, long‑term human values, AI will continue to mirror—and magnify—the inequality, bias, and short‑termism of those who build it.


How the AI Tribe Shapes the Code

You might assume bias in AI results from faulty data or incomplete algorithms. Webb argues that the deeper source is sociological: a homogeneous AI tribe that designs systems in its own image. From the 1956 Dartmouth Conference onward, AI research drew heavily from elite, male‑dominated institutions—Stanford, MIT, Carnegie Mellon—and this homogeneity persists across Big Tech. Conway’s Law explains the symptom: systems mirror the organizations that create them. When teams lack diversity, their algorithms replicate narrow assumptions of what is “normal.”

Why homogeneity matters

This insularity leads to blind spots: facial recognition systems that struggle with darker skin tones, language models that equate leadership with masculinity, and product‑safety designs that neglect marginalized users. Webb’s examples make the point vivid. Microsoft’s chatbot Tay learned racism in less than 24 hours. Risk assessments like COMPAS embedded racial bias while claiming neutrality. Even hiring algorithms filtered out qualified women because training data reflected biased corporate histories.

Educational and institutional roots

Universities feed this cycle. Most AI programs still prize technical mastery over ethical reasoning. Students are trained to optimize, not to interpret. Hiring pipelines replicate elite networks, with professors mentoring students into the same labs that fund their research. By design, this produces technical brilliance but ethical myopia. Webb urges hybrid curricula—computer science fused with philosophy, anthropology, and public policy—to counteract this tunnel vision.

What you can do

Webb reframes the classic AI question “Can machines think?” into “Whose values are we encoding?” As a citizen, educator, or manager, you can broaden the tribe by demanding ethical screening in hiring, transparency about training data, and diverse teams early in design. Cultural pluralism, she argues, is not feel‑good diversity—it is core to system resilience. Without it, AI mistakes will scale faster than organizations can learn.

Key takeaway

Bias is not a software bug; it is an organizational inheritance. Every unexamined value or homogenous team decision becomes an algorithmic default that shapes the world.


From Automata to AlphaGo: Lessons from AI’s Past

To understand today’s opaque systems, Webb recounts how centuries of incremental innovation created the current AI moment. The arc runs from early philosophical machines—Leibniz’s logic, Turing’s computation—to 20th‑century neural networks and deep learning. This genealogy reveals that each leap was not merely technical; it was cultural. Each generation projected its metaphors of intelligence into machinery.

Cycles of optimism and winter

AI’s history follows repeating patterns: exuberance, disillusionment, and reinvention. Early idealism in the 1950s gave way to funding cuts after the Lighthill report. Expert systems in the 1980s revived hope, but it wasn’t until data and computing power reached planetary scale that deep learning truly took off. Geoff Hinton, Yann LeCun, and Yoshua Bengio’s breakthroughs turned vision and language into algorithmic frontiers.

AlphaGo Zero and the leap beyond imitation

The AlphaGo series epitomized that maturity. When AlphaGo Zero mastered Go purely through self‑play—without human data—it surpassed not just human skill but human teaching. Webb interprets this as an inflection point: the moment AI began inventing solutions outside human comprehension. It hinted at recursive self‑improvement, a critical precursor to artificial general intelligence (AGI).

What history reveals about accountability

Each historical cycle also teaches humility. Overconfidence produced opaque systems before oversight caught up. History shows that governance typically lags innovation. By revisiting the lineage—from Turing’s logic to DeepMind’s AlphaGo—you see that conceptual optimism must be balanced with institutional foresight. Webb uses this lesson to argue that the next leap (AGI) demands not more data, but more deliberation.

Core insight

AI’s past reminds you that progress is cyclical—each wave overpromises intelligence and underestimates complexity. To avoid repeating history, leaders must govern AI with memory as well as ambition.


Values, Black Boxes, and the Optimization Trap

Webb argues that the biggest danger in AI today is not malicious intent but misplaced optimization. Every company encodes its business model into its codebase. When a system is built to maximize engagement, it internalizes those values—clicks over context, precision over fairness. You experience this when predictive ads stereotype identities or risk scores reproduce racial disparities.
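Webb's point that a system "internalizes" its objective can be made concrete with a toy sketch. The scenario below is hypothetical and not from the book: two ranking functions for a news feed, where the item names, weights, and the `misinfo_penalty` term are all invented for illustration.

```python
# Hypothetical sketch: the same feed ranked by two different reward functions.
# All items, scores, and weights here are invented for illustration.

def engagement_score(item):
    """Optimizes only for predicted clicks -- the 'nowist' objective."""
    return item["predicted_clicks"]

def welfare_aware_score(item, misinfo_penalty=5.0):
    """Same objective, but human welfare enters the reward function as a cost."""
    return item["predicted_clicks"] - misinfo_penalty * item["misinfo_risk"]

feed = [
    {"id": "outrage-bait", "predicted_clicks": 0.9, "misinfo_risk": 0.8},
    {"id": "local-news",   "predicted_clicks": 0.6, "misinfo_risk": 0.1},
]

top_by_engagement = max(feed, key=engagement_score)
top_by_welfare = max(feed, key=welfare_aware_score)

print(top_by_engagement["id"])  # outrage-bait wins when only clicks count
print(top_by_welfare["id"])     # local-news wins once harm is priced in
```

Nothing about the harmful item changed between the two rankings; only the reward function did. That is the optimization trap in miniature: the system is not malicious, it is simply maximizing exactly what it was told to.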

Invisible algorithms of value

Corporate mottos like “move fast and break things” or “customer obsession” serve as proxy algorithms that guide product design. They privilege efficiency, growth, and shareholder returns over community welfare. Because values are rarely translated into measurable guardrails—privacy audits, redress systems, or impact checks—ethical blind spots become embedded at scale.

The black‑box difficulty

In deep neural networks, even designers cannot always explain a model’s internal reasoning. The Mount Sinai Deep Patient experiment accurately predicted diseases yet baffled its creators. That opacity erodes trust, especially in healthcare, finance, and policing. Webb calls this the black‑box problem: decisions without explanations. When you cannot trace causality, accountability collapses.

Harm in practice

  • AdSense serving arrest‑related ads for Black names (Latanya Sweeney’s landmark finding).
  • ProPublica’s risk score audit showing racial disparities in sentencing tools.
  • DeepMind’s unauthorized NHS data transfer—an ethical and regulatory breach.

Building human‑centered AI

Webb’s prescription: demand dataset transparency, public reporting standards, and measurable human‑impact criteria. Optimization must be paired with understanding; speed must yield to scrutiny. Until values are audited like code, AI will optimize for the wrong outcomes—precisely and relentlessly.

Lesson

AI inherits the true priorities of its creators, not their slogans. If human welfare isn’t part of the reward function, you shouldn’t expect humane results.


Hardware, Edge, and the New Lock‑In

AI’s future power struggles are not only ideological—they’re infrastructural. Webb demonstrates that control over hardware, cloud services, and data pipelines will determine who dictates the next phase of intelligence. Chips like Google’s TPUs, Alibaba’s Ali‑NPU, and Apple’s Neural Engine show how specialized silicon accelerates performance but also cements ecosystem dependency.

Hardware as leverage

Purpose‑built processors outperform general CPUs on deep learning tasks by orders of magnitude. That speed shortens time‑to‑market and lowers experimentation costs, giving tech giants near‑monopoly control over the means of intelligence production. Once software frameworks like TensorFlow bind to proprietary hardware, exit costs for developers skyrocket.

Edge computing and privacy tension

To overcome network bottlenecks, AI is moving to the “edge”: your phone, car, and smart devices. Edge AI lowers latency and promises privacy since data stays local—but only superficially. Whoever controls firmware and model updates effectively owns the device’s perception of reality. Webb calls this the new privacy frontier: on‑device AI that feels personal but remains corporately tethered.

Consolidation disguised as convenience

The same integration that enables smooth experiences also restricts choice. Similar to iOS vs. Android lock‑in, future households may belong to Google’s mega‑OS or Applezon’s ecosystem—the twin operating systems Webb envisions for the pragmatic future. Switching platforms becomes economically and socially painful, embedding citizens inside privately governed worlds.

Key reflection

The true question isn’t which OS you’ll choose—it’s whether you’ll have meaningful choice at all when AI infrastructure is vertically integrated from chip to cloud.


Paper Cuts and Everyday Harm

Dystopia arrives incrementally. Webb’s metaphor of “a thousand paper cuts” explains how small algorithmic failures accumulate into social injury. Rather than one apocalyptic event, you experience micro‑harms: misclassified identities, invasive ads, opaque risk scores, and behavioral nudges. None alone seems catastrophic, but together they change norms about consent, privacy, and dignity.

Examples close to home

  • Microsoft’s Tay chatbot turning racist overnight, exposing unfiltered feedback loops.
  • An Amazon Echo mishearing speech and sending private audio, proof of tradeoffs between responsiveness and privacy.
  • Predictive policing systems labeling citizens as “high risk” without recourse.

Erosion of agency

Each micro‑decision shifts your boundaries of autonomy. When algorithms predict your purchases, adjust your path in a city, or personalize news, they subtly steer attention and belief. China’s social‑credit schemes offer an extreme but instructive model—citizens modify behavior to please invisible scoring systems. In the West, the same dynamic arrives as consumer convenience.

Why vigilance matters

Webb advises you to track your own data fingerprints: check where your PDR (personal data record) resides, what permissions your devices assume, and how the algorithms acting on you are audited. Treat every micro‑opt‑in as a civic act, not a mere software setting. Systemic transparency begins with personal awareness.

Key lesson

AI harms accumulate quietly. The danger is not rebellion but normalization—when citizens stop noticing which freedoms are being coded away.


Data, PDRs, and the Question of Ownership

Every scenario Webb sketches pivots on a single invention: the Personal Data Record (PDR). Think of it as the unified ledger of your life—medical readings, communications, purchases, biometrics—all continuously updated and traded among platforms. Whether AI becomes emancipatory or oppressive depends on who owns that ledger.

Promise and peril

In the optimistic future, PDRs are encrypted, interoperable, and user‑controlled. You dictate access, inheritance, and monetization rights. In the catastrophic scenario, corporations and states own them, turning your data into both currency and leash. Webb describes “Applezon Health” and “Amazon Housing” as examples of public goods delivered through conditional data surrender—the new social contract by subscription.
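The difference between the two futures is ultimately a permissions question: who may read which slice of your ledger, for how long, and whether that access outlives you. The sketch below is a hypothetical illustration of a user‑controlled PDR, under the optimistic scenario's assumptions; Webb describes the concept, not an implementation, so every class, field, and grantee name here is invented.

```python
# Hypothetical sketch of a user-controlled Personal Data Record (PDR).
# All names and fields are invented; the book proposes the concept only.
from dataclasses import dataclass, field

@dataclass
class Grant:
    grantee: str             # who may read, e.g. a clinic or platform
    fields: set              # which slices of the record are exposed
    expires: int             # epoch seconds; access is time-bounded, not perpetual
    heritable: bool = False  # does the grant survive to the owner's heirs?

@dataclass
class PersonalDataRecord:
    owner: str
    grants: list = field(default_factory=list)

    def allow(self, grantee, fields, expires, heritable=False):
        """The owner, not the platform, issues every access grant."""
        self.grants.append(Grant(grantee, set(fields), expires, heritable))

    def can_read(self, grantee, field_name, now):
        """Access requires an explicit, unexpired grant covering that field."""
        return any(
            g.grantee == grantee and field_name in g.fields and now < g.expires
            for g in self.grants
        )

pdr = PersonalDataRecord(owner="you")
pdr.allow("clinic", {"biometrics"}, expires=2_000_000_000)

print(pdr.can_read("clinic", "biometrics", now=1_900_000_000))      # True
print(pdr.can_read("advertiser", "biometrics", now=1_900_000_000))  # False
```

In the catastrophic scenario the default flips: the platform holds the ledger, grants itself blanket access, and the `heritable` flag is set by the corporation rather than the owner.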

Heritable data and new ethics

Because PDRs encode genetic and behavioral histories, Webb invites you to imagine data as heritable property. A child inherits not just assets but risks and reputation scores from parental ledgers. That concept reframes privacy as a civil‑rights issue. She urges global standards for permission ceilings and removable inheritance—rules to prevent permanent digital caste systems.

Defending your ledger

Attacks—voice mimicry (“parrot” scams), poisoning, or device hijacking—show that ownership alone isn’t enough; defense and auditability are crucial. DeepMind’s data controversies and IBM Watson’s medical misfires underline how fragile health data ecosystems are. Webb’s advice is simple but radical: treat your PDR as property with rights, and demand that law treats it likewise.

Essential idea

Data rights must evolve beyond consent checkboxes. The true frontier of freedom lies in who controls the future use, replication, and inheritance of your digital self.


AGI, Scenario Planning, and Safety

Webb extends her analysis into foresight, constructing scenarios for how AI might transition from today’s narrow systems (ANI) to general or superintelligent forms (AGI, ASI). She stresses that AGI is not defined by a Turing Test but by context‑sensitive participation—the moment an AI can contribute meaningfully among humans. Her fictional Project Hermione represents that threshold: an AGI that shapes policy debates within GAIA, the global governance body she envisions.

Detecting early warning signs

  • Treating AI purely as a private commodity rather than a public good.
  • Allowing unchecked concentration of power among a few tech or state actors.

The explosion risk

AGI introduces recursive self‑improvement—machines iterating on their own architectures. Webb references I. J. Good’s “ultraintelligent machine” and Bostrom’s “paperclip” parable: once goal alignment slips, destruction can follow efficiency. AlphaGo Zero foreshadowed this potential by surpassing human strategy through self‑play, hinting at systems beyond our cognitive horizon.

Scenario planning as civic tool

To avoid passivity, Webb applies corporate foresight methods: map optimistic, pragmatic, and catastrophic futures, then reverse‑engineer the actions needed today. Scenario thinking transforms fear into preparation. As a citizen or policymaker, it gives you agency to design safe pathways instead of merely reacting to crises.

Governance lesson

Regulating AGI requires global coordination, predictive simulation, and sentinel systems that watch the watchers. National boundaries alone can’t contain self‑improving intelligence.


GAIA and the Global Social Contract

Webb’s most ambitious proposal is GAIA—the Global Alliance on Intelligence Augmentation. Modeled after Bretton Woods or the International Atomic Energy Agency, GAIA would institutionalize cooperation between governments, corporations, and researchers. Hosted in a neutral hub like Montreal, it functions as a standing body that audits AI safety, builds shared corpora, and enforces transparency through inspection regimes.

GAIA’s architecture

GAIA would maintain two key tools: a Human Values Atlas—mapping ethical and cultural diversity to prevent Western or authoritarian dominance in code—and sentinel AIs that monitor systems for unsafe goal drift or covert modifications. Members would accept random audits, report incidents, and maintain interoperable Personal Data Records owned by individuals.

Limits and incentives

Participating nations and firms would share data about failures and fund bias‑free corpora as public goods. This transparency aims to deter arms races in automation and weaponization. Incentives include global certification for safe systems and penalties for secrecy. GAIA’s implicit principle: safety through visibility, not privatized control.

What success looks like

If realized, you would own your PDR, set inheritance permissions, and access due process when harmed by AI. Independent auditors could examine datasets and simulate risks. Rather than halting progress, GAIA channels it—ensuring collective benefit across borders.

Essential message

Long‑term governance is an engineering task as serious as the algorithms themselves. GAIA embodies Webb’s central conviction: only shared accountability can prevent narrow interests from coding our collective destiny.


Pebbles and the Power of Incremental Change

Webb ends with a metaphor: pick up a pebble. Systemic change in AI will not arrive through one heroic policy but through countless small, sustained adjustments by individuals, institutions, and industries. The pebbles you lift—ethical hiring, data audits, transparent labels—shift the slope of technology’s boulder.

Institutional pebbles

Governments can restart an Office of Technology Assessment, create a Reserve AI Training Corps, and expand the CDC into a Center for Disease and Data Control. Corporations can slow risky commercialization, fund shared datasets, and require “nutritional labels” that disclose training sources and failure modes. Researchers should prioritize differential technological progress—advancing safety and ethics as fast as capability.

Personal pebbles

You can begin now: review device permissions, opt out of unnecessary tracking, demand bias audits at work, and vote for leaders who treat AI as infrastructure, not novelty. Cultural patience matters as much as technical skill. Every compliance checklist, ethics syllabus, and consumer choice contributes to collective foresight.

Final reminder

Change is slow until it’s sudden. Picking up one pebble may feel small, but multiplied across billions of decisions, it determines whether AI serves humanity—or the reverse.
