A History of Fake Things on the Internet

by Walter Scheirer

A History of Fake Things on the Internet delves into the evolution of digital deception, from early photo manipulations to today’s deepfakes. It examines how these fabrications reflect both human creativity and destructiveness, illuminating our digital landscape.

The Internet as Humanity’s New Myth Machine

Why do fake things online feel so real? Walter Scheirer’s central claim is that the Internet isn’t just a communications network—it’s a myth-making machine. Across history, humans used myths to explain contradictions and inspire collective imagination. Today, that same cognitive instinct expresses itself through memes, viral hoaxes, and algorithmically amplified stories. The web is a continuation of the ancient human drive to create shared narratives—it just operates at digital speed and planetary scale.

From Ancient Myths to Memes

Scheirer borrows from Claude Lévi-Strauss, who argued that myths aren’t irrational—they deploy a logic similar to science but directed toward symbolic contradictions. On the Internet, this same logic animates memes. As in Greek pottery, you find familiar stock figures—Doge, Wojak, Pepe—repurposed to convey moral lessons, frustrations, and satire. Visuals move faster than text, which makes images the modern vessel for mythic storytelling. Each meme acts as a miniature myth module, remixable and accessible to everyone with Wi-Fi.

This is why fake news, playful pranks, and participatory fabrications thrive. They make people co-authors of a collective imagination. Shares and likes function as the new oral repetition, spreading modern folklore about politics, identity, and belonging much like Homeric bards once did.

Playful Fakery and Participatory Pranks

Scheirer reminds you that early Internet fakery often grew from humor rather than hostility. Kembrew McLeod’s formula—performance art + satire × media—describes what hackers and troll collectives did long before the concept of “disinformation.” Sites like Something Awful or 4chan evolved from prankish creativity into laboratories for narrative experimentation. These communities, much like mythic tribes, operated by remixing symbols and jokes to explore who they were as a group.

That playfulness, however, created downstream consequences: journalists, state agencies, and ordinary users sometimes mistook these symbolic performances for literal claims. As a result, modern disinformation uses mythic energy but weaponizes it—turning the same creative process into manipulation.

Trust, Imagination, and Parallel Realities

When myth and fact circulate side by side, trust reorganizes. The Internet collapses distance between “my story” and “the story.” You inhabit overlapping timelines—your social feed’s collective imagination and the historical record. Scheirer warns that blanket cynicism is dangerous: fictions can be generative, building communities and art. The challenge is distinguishing play from deceit, symbolic creativity from destructive propaganda.

Where Technology Meets Anthropology

The book’s throughline merges anthropology, media theory, and computer science. By treating memes and myths as data structures, Scheirer shows that imagination itself now runs on cloud infrastructure. This view reframes fake news as a cultural phenomenon rather than simply a technical one. Myths never vanished; they just found server space.

Core idea

The Internet did not invent myth—it industrialized it. What you witness as viral content is an ancient mechanism in new form: stories that solve contradictions, build belonging, and reshape what you treat as truth.

Understanding the Internet as a myth engine allows you to see beyond surface-level panic about “fakes.” It invites a deeper question: how might humanity design media ecosystems that honor imagination without surrendering truth entirely?


Hackers, Hoaxes, and Early Digital Mythmaking

Long before algorithmic misinformation, hackers experimented with creative deception as cultural performance. In Scheirer’s account, hacker communities created a modern mythology about power, secrecy, and rebellion. Episodes like the Dateline NBC “Quentin” hoax reveal how pranksters shaped public myth about hacking while manipulating mass media itself.

The Quentin Hoax and Media Puppetry

During the early 1990s, Dateline NBC aired an interview with an anonymous hacker nicknamed Quentin, who claimed access to secret military UFO archives. In reality, the claim was staged—a prank constructed by members of the Cult of the Dead Cow and the zine Phrack. They slipped fabricated details (Projects ALF-1 and Green Cheese) into underground networks to ensure the lie circulated back toward the mainstream press. The aim wasn’t malice but media critique: hackers proved they could feed journalists what they wanted—spectacle—and expose how narrative desire trumped skepticism.

Underground Narratives as Identity Work

Handles like Erik Bloodaxe, The Urvile, and Doc Holiday emerged as legendary trickster figures. These weren’t just aliases—they were mythic archetypes around which communities coalesced. Textfiles, BBSs, and conferences such as HoHoCon acted as oral epic equivalents, with exploits and hoaxes replacing heroic battles. Each file blended technical hacks and tall tales, forming a lore that trained newcomers how to think, doubt, and play with authority. (Note: Jason Scott’s textfile archives later captured this history as digital folklore.)

From Anarchy to Industry

Over time, many of the same authors of hoaxes became engineers and security professionals. L0pht Heavy Industries evolved from underground experimentation to congressional testimony. This migration from prank to policy marked the domestication of hacker myth. But cultural friction remained—blackhat purists accused the new professionals of selling out, while whitehats accused the underground of irresponsibility. The resulting zine wars—parodies, false advisories, and reputation sabotage by groups such as Gobbles and pHC—illustrate how fakery can serve as moral argument inside subcultures.

Lessons for Today

Scheirer argues that the Quentin hoax predicted modern disinformation operations. The hackers’ manipulation of journalistic appetite resembles today’s clickbait cycles. Their mythic archetypes—outsider heroes exposing corruption—still populate film, news, and cybersecurity marketing. Understanding these roots helps you see how credibility itself became a stage for performance, not just accuracy.

Key takeaway

Digital deception began as cultural play—a means of mythmaking and critique. Only later did it mutate into the weaponized misinformation you face now.

Recognize this lineage, and you’ll better understand that today’s information warfare evolved not from sudden malice but from decades of experimental storytelling, subcultural humor, and a human hunger to turn technology into theater.


Photoshop, Proof, and the Visual Revolution

You live in an image-saturated world where seeing is no longer believing. Scheirer charts the evolution from nineteenth-century darkroom tricks to ubiquitous Photoshop filters, showing how image manipulation changed both art and evidence. The story illustrates humanity’s ambivalence toward visual truth—half creative, half deceptive.

Before Digital: Darkroom Fictions

Hippolyte Bayard’s 1840 “Self Portrait as a Drowned Man” exemplified performative fakery—using photography to tell a fictional story in documentary style. Later, Communist regimes retouched photos to erase disgraced officials, and newsrooms adjusted compositions for clarity or propaganda. Image editing has always been both aesthetic and political, signaling power even when the manipulation was plainly visible.

Photoshop Democratizes Manipulation

John and Thomas Knoll’s Photoshop transformed editing from a darkroom craft into a consumer ritual. The software’s very success—the “Jennifer in Paradise” demo image shot in Bora Bora, the verb “to Photoshop”—made visual mythmaking accessible. What had been skilled deception became play: Photoshop Phriday on Something Awful invited parody and communal remix. Political hoaxes soon followed (e.g., the 2004 Kerry–Fonda composite). The lines between joke, satire, and propaganda blurred.

Smartphones and the Algorithmic Camera

Today’s phone cameras no longer just capture reality—they compute it. Machine vision pipelines smooth skin, adjust skies, and recompose scenes before you even see them. In effect, reality becomes an editable suggestion. Trust in images erodes because every shot might be algorithmically restyled before sharing. (Note: Scheirer contrasts this to analog eras when manipulation required intent; now, it’s automatic.)

Practical lesson

Instead of banning tools, cultivate norms. Creativity flourishes when intent is honest; deception thrives when consumers forget context. Media literacy and disclosure beat censorship.

Scheirer’s point is subtle: visual manipulation is not a modern corruption but an ancient tradition revived with new ease. The difference lies in scale and speed—anyone can mythologize with a tap. Your task is to keep artful fiction distinct from fraudulent fact.


Forensic Science and the Fragility of Proof

How do you verify reality once pixels can lie? Scheirer reconstructs the rise of media forensics—a field combining signal processing, privacy law, and moral urgency. Its origin story winds through copyright battles, criminal investigations, and academic invention, revealing both power and peril.

Watermarks, Fingerprints, and Validation

Forensics began as intellectual property enforcement. Geoffrey Rhoads’s Digimarc system embedded invisible watermarks to prove ownership of Playboy images; IBM’s Minerva Yeung and Fred Mintzer refined methods that survived recompression. Later, Jessica Fridrich’s team showed that photo-response non-uniformity (PRNU)—the noise fingerprint inside every camera sensor—could identify which device took a photo. These innovations promised to authenticate origin and edit history, defining a discipline obsessed with proving authenticity by computation.
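The PRNU idea can be illustrated with a toy simulation (an illustrative sketch only, not Fridrich’s actual wavelet-based estimator—the flat-field scenes, noise levels, and simple correlation test here are all simplifying assumptions): each sensor carries a fixed multiplicative noise pattern, so averaging residuals from many photos recovers a fingerprint that later photos from the same camera will correlate with.

```python
import numpy as np

rng = np.random.default_rng(42)
SHAPE = (64, 64)

def make_sensor():
    """Each sensor gets a fixed multiplicative noise pattern (its PRNU)."""
    return 1.0 + 0.02 * rng.standard_normal(SHAPE)

def shoot(sensor, scene=0.5):
    """A photo is the scene modulated by the sensor's PRNU, plus random shot noise."""
    return scene * sensor + 0.01 * rng.standard_normal(SHAPE)

def fingerprint(sensor, n_photos=50, scene=0.5):
    """Estimate the PRNU by averaging noise residuals from many flat-field photos."""
    residuals = [shoot(sensor, scene) / scene - 1.0 for _ in range(n_photos)]
    return np.mean(residuals, axis=0)

def match_score(photo, fp, scene=0.5):
    """Correlate a photo's noise residual against a candidate fingerprint."""
    residual = photo / scene - 1.0
    return np.corrcoef(residual.ravel(), fp.ravel())[0, 1]

camera_a = make_sensor()
camera_b = make_sensor()
fp_a = fingerprint(camera_a)

same = match_score(shoot(camera_a), fp_a)   # high: photo came from camera A
other = match_score(shoot(camera_b), fp_a)  # near zero: different sensor
```

A photo from camera A scores far higher against A’s fingerprint than a photo from any other camera does, which is the basic logic behind source-camera identification.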

Law Shapes Technology

A turning point came in 2002 with Ashcroft v. Free Speech Coalition. Because simulated child pornography could be legally protected, prosecutors had to prove an image depicted a real victim. Cases like attorney Dean Boland’s morphing scandal accelerated development of authenticity verification. Investigators like Jim Cole reframed the question from “Is the image edited?” to “Where is this child?”—linking forensics back to social fact. Abby Stylianou’s scene-recognition algorithms now match photos to hotel databases, illustrating the blend of technical and human reasoning required for justice.
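The retrieval step behind such scene-recognition systems can be sketched schematically (a hypothetical minimal version assuming precomputed feature embeddings; the real pipeline, database, and feature extractor are far more sophisticated): match a query photo’s feature vector to its nearest neighbor in a database of known locations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical database: unit-norm feature embeddings for known hotel-room photos
db = rng.standard_normal((1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)
labels = [f"hotel_{i % 50}" for i in range(1000)]

def best_match(query):
    """Return the label of the most similar database embedding (cosine similarity)."""
    q = query / np.linalg.norm(query)
    return labels[int(np.argmax(db @ q))]

# A query that is a noisy view of database entry 123 should retrieve that entry
query = db[123] + 0.1 * rng.standard_normal(128)
match = best_match(query)  # matches labels[123]
```

The design point is that recognition reduces to a nearest-neighbor search once images are mapped to comparable feature vectors—the hard forensic work lies in building embeddings robust to lighting, angle, and clutter.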

Promises and Limits of Detection

Researchers like Hany Farid and Nasir Memon built statistical detectors to catch tampering, yet Scheirer warns that early models were often tested on synthetic fakes the researchers themselves had made—risking circular validation. Media alarm sometimes exceeded the actual threat, echoing Michael Crichton’s fictional panic about undetectable fabrications. As deepfakes appeared, research exploded, but efficacy stayed partial—algorithms struggle across formats and can produce false positives that punish honest artists.

Key point

Forensics can reveal traces of manipulation, but truth needs context. Technical evidence must pair with motive, narrative, and human investigation.

Scheirer doesn’t abandon forensics; he situates it as one tool in a larger cultural system. Proof is now probabilistic, not absolute. To trust an image, you must evaluate not just pixels but people.


Shock, Violence, and the Attention Economy

The Internet’s appetite for shock evolved from carnival spectacle to omnipresent business model. Scheirer traces this thread through rotten.com, Videodrome, and modern social platforms, showing that disgust and fascination operate as levers of engagement. The moral question isn’t just what shocks us—but who profits when we can’t look away.

From Freak Shows to Rotten.com

Tom Dell’s rotten.com stitched hacker humor, freak‑show marketing, and BBS aesthetics into an online horror carnival. Photos of death scenes, hoaxes like the fake Princess Diana autopsy, and reader commentary meshed outrage with participation. The site’s traffic jumped precisely when its claims to truth collapsed—demonstrating that attention, not authenticity, drives virality. (Dell’s “Daily Rotten” foreshadowed today’s unfiltered image boards like 4chan.)

Why Shock Works

Marshall McLuhan’s dictum—“the medium is the message”—and David Cronenberg’s Videodrome inform Scheirer’s reading: certain media bypass rationality and strike the body. Shock provokes, monetizes, and desensitizes. Repetition of violent imagery, from pornography to livestreamed atrocities, rewires empathy responses. Platforms learn this empirically: outrage elongates scrolling and boosts ad impressions.

Harm versus Meaning

Scheirer doesn’t condemn all shock. He distinguishes between toxic spectacle—content that eroticizes or normalizes cruelty—and critical shock that forces moral reflection, such as documentary footage of abuses. The same visceral potency that corrupts can awaken. The key variable is intent: whether the image calls for action or merely seduces attention.

Essential reflection

Ask of any shocking piece: Who made it? Why? Who benefits? Genuine critique demands context, not censorship.

Rotten.com faded, yet its logic colonized social feeds: outrage sustains profitability. Understanding this helps you choose when to disengage—and when to use shock deliberately for empathy and reform.


AI, Scale, and the Reinvention of Reality

By the 2020s, cloud platforms, smartphones, and artificial intelligence combined to industrialize imagination itself. Scheirer calls this the restyling of reality—machines now fabricate plausible worlds faster than humans can verify them. From deepfakes to “clairvoyant” climate GANs, technology blurs simulation and foresight while society struggles to adapt.

Scale Changes Everything

Billions of devices stream terabytes each minute, turning memes into planetary folklore. Social networks amplify both humor and harm algorithmically. Scale ensures that even minor manipulations can cascade globally, transforming private fantasies into collective beliefs. Datasets themselves become cultural mirrors—training AIs on patterns of past imagination reproduces inherited myths at a computational rate.

Generative Systems and Predictive Fantasies

Scheirer’s example of Yoshua Bengio’s climate GAN shows how machine learning translates data correlations into immersive fictions. A model can map today’s street onto an imagined flooded future, evoking emotion but not evidence. Such images mobilize empathy but risk misleading viewers into false certainty. (Note: The site thisclimatedoesnotexist.com embodied that paradox—powerful illustration, weak prediction.)

The Metaverse and Generative Culture

Modern generative art and metaverse avatars extend this synthesis: StyleGAN, GPT-3, and virtual-world design fuse entertainment, art, and simulation. Scheirer connects this to Bruno Maçães’s “virtualism”—a civilization embracing self-created realities. Shannon Vallor’s virtue ethics appears here as corrective: technology must cultivate responsible creativity, not just awe. Design platforms that encourage expression yet acknowledge truth boundaries.

Guiding principle

Use generative power to expand imagination, not replace perception. Machines can dream, but humans must interpret.

Scheirer closes with pragmatic optimism: pair technical literacy with moral literacy. The mythic Internet won’t disappear—but you can choose whether its stories enlighten or engulf you.
