The Reality Game

by Samuel Woolley

The Reality Game exposes the hidden world of computational propaganda and its threat to democracy. Samuel Woolley explores how fake news, conspiracy theories, and bot armies exploit digital platforms, urging us to reclaim our digital space and fight misinformation. A must-read for those seeking to understand and counteract modern political manipulation.

The Reality Game: Technology’s War on Truth

What happens when the tools we use to connect with one another begin to tear apart our shared sense of truth? In The Reality Game: How the Next Wave of Technology Will Break the Truth, Samuel Woolley argues that emerging technologies—artificial intelligence, deepfakes, virtual reality, and automated bots—pose unprecedented challenges to democracy, human rights, and even our ability to agree on facts. But the core of his message is hopeful: if we redesign technology with democratic values and human rights in mind, the same tools that manipulate can instead empower and protect truth.

Woolley defines his central concern through the concept of computational propaganda—the use of automation, algorithms, and digital manipulation to distort reality and influence politics. In plain terms, it’s how bots, fake news, and social media algorithms are used to trick us into believing falsehoods. Drawing on firsthand research from Oxford and the Institute for the Future, Woolley combines academic insight with journalistic storytelling to show how this manipulation became global, and how it may intensify with new waves of technology.

From Fake News to Information Warfare

The book begins in the aftermath of 2016, when “fake news” became shorthand for everything wrong with digital communication. Woolley traces how foreign and domestic actors—from Russian troll farms to American marketers—weaponized online media using bots and social algorithms to plant seeds of confusion and polarization. These weren’t futuristic AI conspiracies; they were deliberate manipulations using simple tools and human psychology.

Woolley warns that what we’ve seen so far is only the beginning. The next technological frontiers—voice-simulating AI, deepfake video, and virtual or “extended” reality—will make deception harder to spot and easier to spread. He calls this escalation the Reality Game, where truth itself becomes contested territory. In this world, the winner isn’t the one with better facts—it’s the one who best manipulates attention and emotion.

Technology as Mirror and Amplifier

Central to Woolley’s argument is the idea that technology is never neutral: it reflects and amplifies the societies that create it. Social media didn’t invent propaganda, but it turned age-old manipulation into a precision-targeted, data-driven system. Facebook and YouTube have become both gatekeepers and battlegrounds of information, curating what billions of people see while claiming to be “neutral platforms.”

By tracing historical parallels—from the printing press’s role in religious propaganda to Cold War information wars—Woolley underscores that every new medium reshapes truth. The internet’s promise to democratize information has instead created what he calls a “broken information ecosystem”—one where journalism, science, and democratic institutions are overwhelmed by noise, lies, and artificial engagement.

The Human Responsibility Behind Machines

Woolley refuses to anthropomorphize machines; he insists that the real danger lies in human choices. Technology, he says, is what people make of it. Bots and algorithms don’t have intent—people do. The real issue is that social media companies designed systems that reward engagement over accuracy and growth over accountability. Governments failed to regulate, and societies failed to adapt. In other words, the “truth crisis” is a human problem wearing a digital disguise.

However, Woolley’s message isn’t purely cautionary. He argues that the same tools wreaking havoc on democracy can be consciously designed to protect it. By embedding human rights—liberty, equality, justice—into future technological systems, society can “bake ethics into code.” He advocates for ethical oversight, transparent algorithms, and a new generation of public-interest technologists who bridge the gap between engineers and policymakers.

Why the Truth Still Matters

Ultimately, Woolley’s central thesis is both philosophical and practical: truth is not a technical problem to be fixed with code or AI; it is a social contract built on shared trust. When that trust collapses—when science, journalism, and governance lose legitimacy—democracy itself crumbles. But he also insists that we are not powerless. Citizens, technologists, and governments can all act to rebuild trust through transparency, regulation, and media literacy.

Core Premise

“Technology is not destiny—it’s design. How we design it now will determine whether it breaks or protects the truth.”

Woolley’s project is both diagnosis and prescription. He chronicles how we got here—from early social bots to deepfakes—and outlines how emerging tools like virtual reality, AI, and mixed reality could be used either to manipulate humanity or to foster empathy and truth. His ultimate challenge to readers is to reclaim technology before deception becomes the default setting of modern life.


Computational Propaganda and the War on Reality

At the heart of The Reality Game lies Woolley’s concept of computational propaganda: the organized use of digital tools to manipulate public opinion. He and his Oxford colleagues coined the term to describe how automated bots, algorithmic targeting, and viral memes have been weaponized to influence elections, silence journalists, and warp democratic discourse.

The Rise of the Digital Propagandist

In the past, propaganda required armies of designers, writers, and bureaucrats. Today, it requires little more than code and creativity. Woolley traces how this transformation began during political conflicts such as Ukraine’s Euromaidan protests and the Syrian revolution. In these crises, governments deployed social media bots to amplify official messaging, drown out dissent, and artificially manufacture popularity. The Syrian Electronic Army, for instance, used thousands of fake Twitter accounts to harass dissidents and flood platforms with pro-Assad messages.

By the time of the 2016 U.S. presidential election, such strategies had evolved into an ecosystem of global manipulation. Websites like the Denver Guardian spread completely false stories (“FBI Agent in Clinton Leaks Found Dead”) while fake social media accounts triggered viral outrage. Whether the operators were Russian operatives or domestic freelancers chasing ad revenue, their effect was the same: confusion, division, and distrust.

From Social Utopia to Digital Dystopia

Social media was once marketed as the ultimate democratic tool—open, participatory, and empowering. Groups like the Electronic Frontier Foundation (EFF) envisioned a liberated internet where free expression would flourish. Woolley contrasts this early utopianism with today’s “surveillance capitalism,” in which a few tech giants own both the platforms and the data that shape public opinion. Algorithms built to optimize engagement now determine not only what news people see but what they believe.

He argues that Twitter and Facebook transformed from neutral networks into hybrid political machines. Their opaque algorithms became the new editors of public consciousness, favoring outrage over objectivity. Governments and corporations exploited that opacity to weaponize information flows. The result, he writes, is a new form of media control—one hidden behind personalization and user choice.

The Fall of Trusted Institutions

The fallout of computational propaganda is visible not only online but across societies. Woolley cites Gallup and Pew data showing a decades-long collapse of trust in institutions—from Congress and the media to science and banking. He argues that this skepticism has been manipulated deliberately. When state-sponsored trolls or conspiracy theorists discredit journalism as “fake,” it becomes easier to fill the void with manufactured narratives.

The equation is simple but devastating: weaken trust + flood falsehood = fracture democracy.

But Woolley maintains that truth can fight back. Just as AI and automation can accelerate deceit, they can also enhance verification and journalistic reach. The difference, he insists, will depend on political will, ethical design, and citizens’ awareness. The war on reality isn’t being waged by technology itself—it’s being waged through our complacency in how we use it.


How Conspiracy Theory Replaces Critical Thinking

One of Woolley’s most sobering observations is how easily modern critical thinking can morph into conspiracy thinking. Both begin with skepticism—but one seeks evidence while the other rejects it. In our current digital ecosystem, he writes, conspiracy theories thrive because social media rewards engagement, not accuracy. The algorithms don’t care whether you’re debunking a myth or spreading it—they only care that you’re reacting.

The Cognitive Trap of ‘Digging Deeper’

Woolley explores this through examples like QAnon and “Pizzagate,” where users on 4chan and Reddit pieced together fantasies from stray headlines and digital breadcrumbs. These movements grow by exploiting our natural critical impulse—our desire to investigate, connect dots, and expose hidden power. But without verifiable data, that curiosity spirals into obsession. Each attempt to debunk a claim only strengthens believers’ conviction that “the truth is being suppressed.”

He notes, chillingly, that conspiracy thinking requires no central command. It functions like a self-sustaining algorithm fueled by distrust and emotion. Anonymous actors, bots, and recommendation systems amplify this process, blurring the line between grassroots inquiry and organized manipulation. Woolley warns that many citizens now believe conspiracies more readily than they accept verified journalism or peer-reviewed science.

The Role of Declining Trust

Declining trust in institutions feeds this spiral. Gallup data show collapsing confidence across sectors—religion, medicine, politics. In contrast, trust in “alternative” sources, like viral social media or partisan news, has risen. Woolley quotes an Edelman study showing American trust in government dropping to one-third while authoritarian countries like China report over 80 percent trust—whether genuine or coerced. He cautions that both extremes erode democracy: cynicism breeds disengagement, while blind loyalty incubates propaganda.

Toward Digital Literacy, Not Digital Cynicism

The antidote, Woolley suggests, is neither blind faith nor total doubt, but informed skepticism. Media literacy must evolve beyond “spotting fake news.” It requires understanding algorithms, recognizing emotional manipulation, and resisting “confirmation culture.” Courses like the University of Washington’s “Calling Bullshit” exemplify how to equip citizens with scientific reasoning in an age of pseudoscience and political spam. As he puts it, knowing how to think critically means also knowing when to stop digging.


Artificial Intelligence: The New Engine of Persuasion

In one of the book’s most revealing sections, Woolley dismantles the myth that artificial intelligence will automatically ‘fix’ false information. During Facebook’s infamous 2018 hearings, Mark Zuckerberg framed AI as the company’s future solution to misinformation. Woolley calls this idea a MacGuffin—a convenient plot device that shifts responsibility away from human error. He insists that while AI is powerful, its impact depends entirely on who builds, trains, and governs it.

AI as a Propagandist and Policeman

AI can be used both to generate and to detect propaganda. The same deep learning tools that create fake video or hyper-personalized ads can also train detection systems to flag them. Woolley examines this tension through the example of AI chatbots used in election campaigns. While early “political bots” simply automated tweets, new machine-learning systems can simulate conversation, learn from responses, and subtly persuade. In his research, Woolley even found bots that lured politicians into unknowingly interacting with disinformation, lending it legitimacy through retweets.

Bias in the Machine

Every algorithm reflects its creator’s biases. Woolley points to examples like facial recognition systems misidentifying people of color or AI hate-speech filters trained only on Western cultural norms. These errors aren’t technical glitches—they’re moral blind spots coded into software by homogenous teams. When AI moderates political speech or advertising, it decides whose voices matter. Without diverse representation in design, the “solution” to propaganda can deepen inequality.

He argues for ethical design: AI developed by interdisciplinary teams that include ethicists, sociologists, and marginalized communities. “We can’t outsource morality to machine learning,” he writes. “We have to teach technology our values before it teaches us its own.”

Fighting Fire with Fire

Can AI fight AI? Woolley admits partially yes. Fact-checking organizations like Full Fact and projects like Botometer already use machine learning to detect fake accounts and misinformation patterns. But he cautions that detection alone is reactive. Without systemic transparency, platforms will always be one algorithmic update behind the propagandists. Instead of “automating truth,” he urges social media companies to combine AI efficiency with human judgment—to develop hybrid systems that are accountable, explainable, and fair.


Deepfakes and the Crisis of Seeing Is Believing

Fake news through words was only the beginning; fake video is its next, more dangerous evolution. Woolley dedicates an entire section to the rise of deepfakes—AI-generated videos that make real people appear to say or do things they didn’t. He calls them the ultimate test for a visual culture built on evidence and trust.

When Even Evidence Lies

The Jim Acosta incident, where a doctored White House video falsely depicted him assaulting an intern, demonstrates how easily truth can be edited without AI. The deepfake threat magnifies that danger. Imagine a fake video of a candidate confessing to a crime hours before an election—by the time fact-checkers debunk it, the damage is irreversible. Woolley recounts experiments like Jordan Peele’s satirical deepfake of Barack Obama to illustrate how even sophisticated viewers can be fooled when visual cues align with expectation and bias.

Beyond Porn and Politics

Most deepfakes today are pornographic, using women’s faces without consent. Woolley calls this digital assault “a toxic fusion of sexism and surveillance.” But similar techniques could soon destabilize journalism, diplomacy, and law. When every image is suspect, authoritarians can dismiss real videos as fake—an effect scholars call the “liar’s dividend.” In this world, seeing is no longer believing.

Yet Woolley urges caution against moral panic. History shows propagandists prefer cheap, accessible tools over cutting-edge ones. The real danger isn’t mass technological breakthrough—it’s public desensitization. Each fake erodes collective trust, priming societies for manipulation.

Rebuilding Authenticity

Countermeasures are emerging: blockchain watermarking (as with Amber Authenticate), blink-pattern analysis to catch synthetically generated faces, and newsroom training to verify visual evidence. Woolley champions these efforts but stresses a deeper fix: strengthening digital literacy and protecting victims. Our defense against deepfakes cannot rely on code alone—it must restore moral weight to authenticity and consent.


Virtual Reality and the Manipulation of Perception

What if propaganda could not only tell you what to think, but make you feel it? In his chapter on extended reality media—virtual, augmented, and mixed reality—Woolley explores how fully immersive environments could transform persuasion. These technologies can generate extraordinary empathy or terrifying conformity, depending on who programs them.

When the Body Has “No Metric for Fake”

Working with futurists at the Institute for the Future, Woolley describes how VR bypasses cognitive skepticism by fusing sensory inputs: sight, sound, movement, even touch. In VR, your body can’t tell what’s “real.” This makes it a powerful tool for storytelling—and an equally powerful vector for manipulation. China already uses VR loyalty tests to reinforce Communist Party ideology. In these digital “rooms,” participants are quizzed on party doctrine, their results tied to real-world promotions or punishments.

Woolley warns that such mechanisms could easily appear in democratic societies too—virtual classrooms teaching revisionist history, immersive propaganda disguised as education, or fake experiences engineered to provoke fear or obedience. If today’s social media targets your attention, tomorrow’s VR will target your senses.

Technology for Empathy and Justice

But VR also offers profound possibilities for good. Projects like Alejandro González Iñárritu’s VR installation Carne y Arena allow viewers to experience life as a refugee, while initiatives in Palau use VR to show lawmakers the environmental damage of climate change firsthand. Studies from the University of Barcelona suggest that short VR experiences can reduce racial bias; other creators, like technologist Clorama Dorvilias, design VR diversity training that gamifies empathy rather than shame. The same immersion that can deceive can also inspire understanding.

Principles for Ethical XR Design

Woolley’s solution echoes his broader thesis: transparency, accountability, moderation, and inclusivity must be built into XR platforms from the start. Companies should verify identities privately while maintaining user anonymity when necessary, provide visible cues about truthfulness, and actively moderate hate or manipulation. Above all, he calls for “slow extended reality”—a mindful, human-centered approach that prizes learning and empathy over engagement metrics. As he writes, “If VR can make us feel racism, it can also help us unlearn it.”


Designing Technology in the Human Image

In a world where machines increasingly mimic us, Woolley asks the essential question: what happens when the line between human and technology dissolves? His answer: we must design machines to be humane before they design humanity in their image.

Anthropomorphic Machines and Manipulation

From Siri to Alexa to Google Duplex, voice assistants are becoming uncannily humanlike—polite, emotional, even gendered. Woolley exposes how these designs carry assumptions about gender and power. Female-coded voices reinforce stereotypes of women as subservient helpers, while empathetic tones make AI seem trustworthy even when it sells to us or spies on us. Experiments show that people judge identical information as more intelligent and credible when it is delivered by a humanlike voice than when they read it as text.

Fake Faces, Real Consequences

Advances in generative adversarial networks (GANs) mean AI can now produce endlessly realistic faces of people who don’t exist. These fake profiles populate social media, spreading propaganda with no traceable author. Woolley recounts how researchers used stolen or AI-generated avatars to infiltrate groups, posing as activists or minorities to sow division. Meanwhile, facial recognition systems—already error-prone and racially biased—threaten privacy and democratic protest worldwide. When Russia used scraped social data for surveillance, it erased the boundary between observation and control.

Woolley’s hypothetical ‘MeBot’ scenario pushes this to the edge: near-future robots indistinguishable from humans could vote, testify, or mimic you so precisely that even governments couldn’t tell the difference. The scenario dramatizes a very real fear—that human autonomy may not survive unregulated replication.

Humanity by Design

To prevent such dystopias, Woolley proposes a new design ethics grounded in dignity, consent, and rights. AI-generated content must be labeled; humanlike voices must reveal automation; facial recognition requires explicit consent and safeguards against bias. More broadly, citizens must resist the cultural tendency to treat convenience as morality. “It’s not machines we must fear,” he writes, “but humans who build machines without ethics.” The task is not to stop technology from resembling us, but to ensure that what it reflects is the best of who we are.


Rebuilding Democracy and Truth in a Digital Age

The book concludes with a roadmap for reclaiming truth and democracy from the jaws of misinformation. Woolley emphasizes that technical fixes—AI filters, fact-checkers, or code patches—cannot repair cultural fractures. Restoring truth requires structural reform, civic courage, and collective will.

From Band-Aids to Systemic Change

Most existing efforts, he argues, are “squirt guns of truth against a firehose of falsehood.” Piecemeal moderation will always lag behind evolving tactics. Instead, we need multilayered solutions: reformed election laws (like the Honest Ads Act), new antitrust enforcement against data monopolies, and algorithmic transparency mandates. Social media companies must acknowledge that they are media companies—curators of information—and bear ethical responsibility for its spread.

Woolley also calls for an Ethical Operating System (Ethical OS)—a set of principles guiding engineers to anticipate social harm before releasing technology. Like a Hippocratic oath for coders, it would establish “do no harm” as a design standard. He insists that technologists learn from social scientists, and that policymakers learn to code.

A Culture of Digital Resilience

Ultimately, democracy must be rebuilt not just through policy but through people. Media literacy, education reform, and civic empathy are our most scalable defenses. Woolley believes social cohesion—not censorship—is the antidote to disinformation. As he writes, “You can’t fix polarization with code; you fix it with conversation.”

He envisions a future where corporations balance profit with purpose, governments defend human rights online, and citizens demand transparency as forcefully as they demand free speech. The truth will not defend itself. But if humanity learns to design technology ethically and use it wisely, the next wave of innovation may yet rebuild the democratic foundations it has shaken.
