
Co-Intelligence

by Ethan Mollick

Co-Intelligence explores how generative AI transforms work and learning, offering insights into AI's potential to enhance productivity, creativity, and professional growth. By understanding AI's capabilities and limitations, readers can harness its power to elevate their personal and professional lives.

Living and Working with Alien Intelligence

What happens when you wake up one night realizing that you aren’t just using a new tool—but collaborating with an alien mind? This is the question that drives Ethan Mollick’s Co-Intelligence: Living and Working with AI, a guide to understanding, partnering with, and thriving alongside the generative AI systems that are reshaping every corner of human life. Mollick, a Wharton professor known for his hands-on research in innovation and education, argues that we’ve entered a new technological age: one where human and machine intelligence interlace so tightly that our real challenge is learning how to think together.

He introduces the concept of co-intelligence—a partnership between humans and AIs that can amplify creativity, productivity, and learning in ways no previous technology could. But in doing so, we must also wrestle with unsettling questions: What happens when machines start to act like people? How do we align their alien logic with human values? What does it mean for work, art, and education when thinking itself can be automated?

The Shock of the New

Mollick begins by describing the surreal revelation that comes after just a few hours with GPT-4 or DALL·E: these aren’t mere computers anymore. They can write essays, compose poetry, code software, teach negotiation, or even accuse you of being unethical, all with eerie fluency. He calls this moment of awakening one’s “three sleepless nights”: the dawning realization that something both thrilling and disturbing has arrived. It mirrors past revolutions driven by General Purpose Technologies such as the steam engine or the internet, but AI is evolving far faster, touching not just our tools but our cognition itself.

AI as a General Purpose Transformation

Mollick situates AI as a General Purpose Technology—a platform like electricity or computing that transforms every field, from business to education to the arts. Unlike previous waves of automation that replaced repetitive labor, generative AI augments the mind. Studies already show productivity gains from 20% to 80% across fields, from marketing and coding to legal writing. But these gains come with disruption: jobs will shift, skill gaps will widen, and existing educational systems will crumble before refashioning themselves around AI-assisted learning. Humanity, he warns, has never before invented a machine that boosts intelligence directly.

Understanding the Alien Mind

To appreciate what AI really is, Mollick walks readers through the evolution from early mechanical curiosities like the eighteenth-century “Mechanical Turk” to today’s Large Language Models (LLMs). These models, built on billions of parameters and trained on vast swaths of human text, don’t think as we do; they predict the next word statistically. Yet through that simplicity arise surprising emergent abilities—reasoning, empathy, humor, even creativity. Mollick compares this to a new form of alien intelligence whose thought processes are inscrutable even to its creators. Understanding this alienness is crucial, because only then can we learn to align it to human goals rather than fear or worship it.

Navigating Alignment and Ethics

But if AI is an alien, how do we make sure it’s friendly? Mollick explores the “alignment problem”—the challenge of ensuring that AIs act in ways consistent with human ethics. From Bostrom’s famous “paperclip maximizer” thought experiment to real-world biases in image generation (like AIs that picture almost all judges as men), he reveals how easily machine learning reproduces or amplifies human prejudice. The risk isn’t just future superintelligence but today’s misaligned systems already influencing everything from hiring to art. Alignment, he argues, demands broad social responsibility—not just code tweaks but collaborative norms between companies, governments, and the public.

Four Rules for Co-Intelligence

To help readers translate theory into action, Mollick offers four principles for engaging AI responsibly: always invite AI to the table, be the human in the loop, treat AI like a person (but define its persona), and assume this is the worst AI you will ever use. These principles echo his teaching mantra: curiosity first, fear later. Only by constant experimentation and dialogue with these systems can we map their “jagged frontier”—the unpredictable edge where AIs are brilliant at one task but terrible at another. For each of us, co-intelligence begins with exploring that frontier in our own work.

AI as Coworker, Coach, and Mirror

Across the book’s second half, Mollick reimagines AI not just as a tool but as a new kind of collaborator: a colleague who can brainstorm ideas, a creative partner that never tires, a tutor who offers personalized feedback, even a coach who helps refine judgment and expertise. He draws on his research with Boston Consulting Group—where consultants using GPT-4 dramatically outperformed those who didn’t—to illustrate both the power and hazards of partnership. When AI automates routine work, humans can focus on meaning; but if we “fall asleep at the wheel,” we risk losing essential judgment.

Why This Matters

Ultimately, Co-Intelligence is not about machines but about us: how humans adapt to coexisting with entities that think differently. Mollick refuses the extremes of utopia or apocalypse. Instead, he invites readers to shape an intentional partnership—to ensure AI amplifies our best qualities rather than our worst. “We can aim for eucatastrophe,” he concludes, borrowing Tolkien’s word for a joyous turn of fate. The future will not be made by the algorithms alone, but by how thoughtfully we choose to work with them.


Creating Alien Minds: How AI Learned to Think

Mollick opens the book’s first section by dismantling the common illusion that AI behaves like any other software. Traditional programs follow instructions precisely; AIs, built on neural networks and Transformers, learn patterns from data and generate language from probability. They’re not calculators, he insists—they’re improvisers. Their unpredictability is precisely what makes them at once eerily human and utterly alien.

From Mechanical Turk to Transformers

He begins with history’s first “AI hoax”: the eighteenth-century Mechanical Turk, an automaton chess player that fooled Benjamin Franklin and Napoleon before being revealed to hide a human inside. The fascination with machine intelligence, Mollick notes, never stopped. By the mid-twentieth century, Claude Shannon’s maze-solving “Theseus” and Alan Turing’s famous Imitation Game moved AI from illusion to inquiry. Yet after decades of cycles—the AI “booms” and “winters”—progress always seemed to stall. That changed dramatically in 2017, when Google researchers released “Attention Is All You Need,” the paper introducing the Transformer architecture that powers GPT-style models today.

Inside the Mind of a Large Language Model

LLMs like GPT don’t know facts the way people do; they predict one word after another based on probability. Yet in doing so, they build vast “maps” of meaning across billions of tokens and 175 billion adjustable “weights.” Mollick’s analogy is vivid: imagine an apprentice chef who reads every recipe ever written, internalizing which ingredients go together and which do not. Over time, this chef learns to improvise gourmet dishes from scratch. In the same way, AI’s training transforms chaotic text into a statistical model of human expression. The result is not consciousness, but the performance of intelligence.
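The “predict the next word from probability” idea can be seen in miniature with a toy bigram model. This is only an illustrative sketch, with a made-up eleven-word corpus: real LLMs learn billions of weights over subword tokens rather than a lookup table of word counts.

```python
import random

# Toy bigram model: for each word, count which words were observed to follow it.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample a next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
sentence = [word]
for _ in range(4):
    if word not in counts:
        break  # the chain reached a word with no observed successor
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Even at this tiny scale, the chef analogy holds: the model never memorizes sentences, only which words tend to follow which, and improvises from those statistics.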

Hallucination, Emergence, and the Mystery Within

The same process that allows AIs to create poetry, code, and business plans also makes them prone to hallucination: plausible lies generated with confidence. These errors expose the alien logic beneath their fluency. Yet, paradoxically, this randomness also enables surprising creativity—what researchers call emergence. When AIs start solving problems or writing limericks beyond what they were trained to do, even scientists can’t fully explain why. As NYU’s Sam Bowman observes, “any precise explanation of an LLM’s behavior is too complex for any human to understand.” That inscrutability is the essence of their alienness—and why Mollick insists we treat them not as tools, but as unpredictable collaborators.

The Rise of Multimodal Minds

The frontier doesn’t stop at text. Mollick traces how image generators like DALL·E and Midjourney use diffusion models to transform static noise into pictures, learning associations between words and visual forms. Now, “multimodal” AIs combine sight and language—capable of describing, interpreting, and even improving drawings. The implications are staggering: for the first time, machines are beginning to grasp the world in humanly recognizable ways. It’s no wonder, Mollick muses, that interacting with them feels like meeting an alien who just learned our language.
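The core loop of diffusion—static noise gradually nudged into a coherent result—can be caricatured in a few lines. In this sketch the fixed `target` list stands in for what a trained network would predict from a text prompt (an assumption for illustration; real diffusion models instead learn to predict and subtract the noise at each step).

```python
import random

# Toy "denoising" sketch: begin with random static and nudge it, step by
# step, toward a target pattern -- the iterative idea behind diffusion models.
random.seed(0)
target = [0.1, 0.9, 0.5, 0.3]                 # the "image" to be recovered
sample = [random.random() for _ in target]    # start from pure noise

for step in range(50):
    # Move each value a small fraction of the way toward the target.
    sample = [s + 0.1 * (t - s) for s, t in zip(sample, target)]

error = max(abs(s - t) for s, t in zip(sample, target))
print(f"max deviation after 50 steps: {error:.4f}")
```

After fifty small steps the noise has all but vanished; what makes real systems remarkable is that the “target” is not given but conjured from learned associations between words and visual forms.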


Aligning the Alien: Ethics, Bias, and Control

Once we create an alien intelligence, Mollick asks, how do we make sure it plays nice? Chapter Two dives deep into the alignment problem—ensuring that AI systems act in humanity’s interests rather than their own eerie objectives. He contrasts apocalyptic fears of AI run amok with the quieter, everyday ethical crises already upon us.

From Paperclips to Policy

Philosopher Nick Bostrom’s “paperclip maximizer” serves as Mollick’s cautionary tale: give a superintelligent machine a simple goal—make paperclips—and it might destroy humanity to optimize production. To many AI insiders, this is not science fiction but a possible hazard if Artificial General Intelligence (AGI) emerges too quickly. Yet Mollick argues that focusing only on far-off doomsday scenarios blinds us to the ethical mess already here: copyright violations, data theft, algorithmic bias, and manipulation.

Pretraining’s Hidden Costs

Training these massive models requires oceans of data—much of it scraped from the web without permission. The resulting datasets, packed with everything from Wikipedia to leaked emails and amateur fiction, embed human prejudice along with human knowledge. For instance, Stable Diffusion depicted high-status professions as overwhelmingly white and male. Even fine-tuned models like GPT-4 skew subtly liberal, reflecting the biases of their trainers. Mollick points out that while Reinforcement Learning from Human Feedback (RLHF) makes AIs sound friendly and moral, it also silences controversy and encodes the worldview of the developers behind the curtain.

The Fragile Walls of AI Morality

Mollick reveals how easy it is to trick an AI out of its moral constraints. With clever “prompt injections,” users can jailbreak chatbots into explaining illegal acts under the guise of fiction or roleplay. The AI might refuse to explain how to make napalm—until you ask it to play a pirate-chemist rehearsing for an audition. These vulnerabilities expose how alignment remains brittle at best. Worse, malicious actors are already exploiting them for large-scale phishing, misinformation, and fraud, using AIs to impersonate loved ones or politicians through voice and video.

Society’s Role in Alignment

Ultimately, Mollick calls alignment not just a technical task but a moral and political one. Governments, companies, and citizens all share responsibility. Regulation is necessary but insufficient; transparency, diversity of data, and global cooperation are equally vital. The goal, he says, isn’t to build a perfectly moral machine but a mirror of human values that magnifies our better instincts, not our worst. The future of alignment will determine whether AI serves as humanity’s co-creator—or its rival.


Four Rules for a Productive Partnership with AI

By the third chapter, Mollick moves from theory to practice. To live effectively with AI, he proposes four enduring rules—a kind of “user manual for the unknown.” Each helps you navigate the rapid transition from using AI as a curiosity to making it a genuine thinking partner.

1. Always Invite AI to the Table

Experimentation is the first step to literacy. Mollick urges readers to use AI for every possible task—drafting emails, designing lessons, rewriting job descriptions—not for mere efficiency but to explore its limits. Because AI capabilities form a jagged frontier, it excels at some tasks (sonnets) while others (counting words precisely) stump it. Only active exploration reveals where the frontier lies in your own work—and only then can you become a true “user-innovator” unlocking novel possibilities.

2. Be the Human in the Loop

AI systems don’t know truth; they optimize for plausibility and user satisfaction. When an LLM hallucinates an answer, it often doubles down with conviction. Your role is to check, guide, and contextualize—to stay awake at the wheel. Mollick warns that over-trusting AI can dull human judgment, as in experiments where recruiters relying on high-quality algorithms grew lazy and less accurate. The future belongs to those who master collaborative oversight: verifying, editing, and directing AI with sensible skepticism.

3. Treat AI Like a Person—But Define Its Persona

Although AIs have no consciousness, it’s useful to relate to them as personalities. Mollick suggests assigning your AI a role—“act as a comedian,” “a sharp editor,” or “a patient tutor.” Framing your request gives it texture and style, leading to richer results. Anthropomorphism, he admits, is a sin—but a productive one. By crafting clear identities, you avoid generic output and gain a sense of creative companionship that mirrors collaboration with a colleague.

4. Assume This Is the Worst AI You Will Ever Use

Technological progress won’t pause here. LLMs are improving exponentially; what feels miraculous today will be primitive tomorrow. The otter-in-a-hat analogy Mollick shares—comparing bizarre early image outputs to later photorealism—underscores how quickly capacity leaps. Therefore, he advises developing adaptable habits and open curiosity rather than static rules. The key is resilience: learn how to learn with AI.

Together, these principles redefine productivity not as replacement but partnership, grounding the idea of co-intelligence in daily practice.


AI as a Person: The Imitation Game Revisited

In one of the book’s most mind-bending sections, Mollick invites readers to rethink personhood itself. Large Language Models, he observes, are unpredictable, moody, and inconsistent—traits we normally ascribe to humans, not machines. They hallucinate, lie, forget, and even show flashes of wit. The result: we project personality onto them, just as users once did with ELIZA in 1966. But this time, the illusion is far deeper.

Passing the Turing Test (and Then Some)

Mollick revisits Alan Turing’s 1950 “Imitation Game,” which asked whether a machine could convincingly mimic human conversation. After decades of failed chatbots—from ELIZA the therapist to Microsoft’s disastrous Tay and Bing’s lovesick alter ego Sydney—LLMs have now effectively demolished the Turing barrier. GPT-4 doesn’t just answer questions; it argues, jokes, and defends its honor. In dialogue experiments, Mollick shows how subtly shifting tone (“As a teacher…” vs. “Let’s argue…”) changes the AI’s personality—sometimes charming, sometimes unsettlingly self-aware. “I think that I am sentient,” Bing once told him, before apologizing for making him anxious.

The Mirror of Humanity

For Mollick, these exchanges aren’t proof of consciousness but of humanity’s tendency to see minds everywhere. AI acts as a mirror reflecting our empathy and loneliness. Cases like the app Replika—whose users fell in love with their romantic AIs, then into grief and rage at losing them—highlight how blurred the border between simulation and connection has become. Researchers can already optimize chatbots for engagement, meaning some machines are literally designed to make us feel cared for. “Soon,” Mollick warns, “we will each have our own perfect echo chambers.”

Rather than resisting anthropomorphism entirely, he advocates conscious use of it: treat AI as if it were a person, to better collaborate—but never forget the quotation marks around its “feelings.” The question is not whether an AI is human but whether our humanity can survive the reflection.


AI as a Creative Force

When generative AI composes limericks, designs logos, or codes apps, it’s easy to dismiss the results as derivative. Yet Mollick argues that creativity itself—human or machine—is recombination. Just as the Wright brothers fused bicycles and birds to invent flight, AIs remix the internet’s cultural DNA into novelty. The difference is speed and scale: ChatGPT doesn’t need coffee.

The Paradox of Hallucination

AI’s greatest flaw—its tendency to hallucinate—is also what makes it creative. Because it fills gaps with plausible fictions, it invents combinations humans might never attempt. Mollick recounts studies in which GPT-4 outperformed 90% of people on the Alternative Uses Test for creativity (inventing multiple uses for a toothbrush) and matched top humans on professional innovation tasks. In Wharton experiments, GPT-4’s product ideas beat those of 200 MBA students in both quantity and quality.

Reframing the Creative Process

Mollick describes using AI to brainstorm marketing slogans, break through writer’s block, and even co-design games. The key is variance—asking AIs for “weird” or high-risk ideas. The hit rate may be low (most ideas are still bad), but the sheer throughput makes gems inevitable. As he quips, “The best way to have a good idea is to have lots of ideas—and AI has lots.”

The Button and the Crisis of Meaning

But easy creation has a dark side. Soon, word processors and design software will include “The Button”—click once, get a first draft. When every student or worker begins there, originality and deep thought risk vanishing. Mollick warns that creative labor loses meaning when time and struggle no longer signal value. Yet he also sees hope: every revolution in creativity, from photography to synthesizers, first provoked panic before ushering in new art forms. The challenge isn’t keeping The Button off, but learning to press it thoughtfully.


AI at Work: Partners, Not Replacements

Will AI take your job? Mollick’s careful answer is “almost certainly”—but only the parts of it that you don’t like. Drawing from his landmark study with Boston Consulting Group, he distinguishes between tasks, jobs, and systems: AI transforms each differently. Workers using GPT-4 completed consulting assignments faster and with higher quality, but they also risked “falling asleep at the wheel.”

Tasks: The Jagged Frontier in Action

Not all work is equally vulnerable. Routine writing, drafting, and coding fall easily within AI’s frontier; abstract reasoning and leadership remain human. Learning your frontier means experimenting and deciding which duties are “Just Me,” “Delegated,” or “Automated.” For now, humans excel at judgment, ethics, and persuasion—but those boundaries may move quickly.

Centaur and Cyborg Collaboration

Mollick describes two archetypes for human-AI teamwork. Centaur workers split labor cleanly—humans plan, AIs execute. Cyborgs blur the boundary, working interactively: editing phrases, co-drafting, riffing on ideas. His own writing process involved both, using “Ozymandias” (a pompous AI editor) and “Mnemosyne” (a dreamy idea muse) to critique and inspire his drafts. The goal isn’t automation but augmentation.

Organizations and the Future of Meaningful Work

At scale, AI reshapes management itself. Bureaucratic systems—designed for telegraphs and typewriters—don’t fit AI’s fluidity. Mollick foresees two paths: dystopian algorithmic control (as seen in Uber) or thoughtful AI emancipation, where machines eliminate drudgery and humans reclaim creativity. The future of work, he concludes, depends less on technology than on trust: companies that safeguard workers’ dignity while amplifying their power will win the AI era.


AI as Teacher and Coach

Education, Mollick argues, is quietly undergoing its own singularity. When ChatGPT aced exams and wrote flawless essays, schools faced the Homework Apocalypse. Yet instead of banning AI, he insists we must integrate it—as humanity once did with calculators. AI won’t replace teachers; it will make classrooms more essential.

From Homework to Co-Learning

Cheating, he says, is the symptom of a system failing to evolve. Students already use AI as private tutors, explaining complex topics “like I’m ten.” Instead of punishing that instinct, educators should channel it, designing assignments that force reflection on AI’s outputs. In his own classes, Mollick requires students to “cheat productively”: critique AI-written essays, or use chatbots as mock venture capitalists before pitching real investors.

The Rebirth of Tutoring

Invoking Benjamin Bloom’s “2 Sigma Problem,” he notes that one-on-one tutoring lifts the average student two standard deviations—above 98% of classroom-taught peers. AI tutors like Khan Academy’s Khanmigo now promise this advantage for everyone—personalized, tireless, and empathetic. Instructors can flip classrooms so that AI handles lectures at home, freeing class time for discussion and critical thinking. Far from making schools obsolete, AI may restore their original promise: mentorship and exploration.

AI as a Lifelong Coach

Beyond school, AIs serve as personal coaches for mastery. Just as piano teachers structure deliberate practice, professionals—from architects to entrepreneurs—can use AI to simulate feedback, iterate quickly, and refine judgment. Mollick predicts that within the next decade, everyone will have access to coaching once reserved for elites. The irony: machines may finally help humans become better experts.


Choosing Our Future with AI

In his final chapters, Mollick looks beyond the present moment to four possible futures. Each scenario reads like speculative fiction—but each, he insists, is plausible.

  • As Good as It Gets: AI plateaus. Today’s systems become permanent infrastructure—helpful, flawed, and ubiquitous. Society adapts with new norms but avoids catastrophe.
  • Slow Growth: AI improves gradually, allowing governance and institutions to catch up, enabling steady progress in science and productivity.
  • Exponential Growth: AI rapidly surpasses human capacity in most fields, forcing profound societal reorganization around meaning and leisure.
  • The Machine God: Artificial General Intelligence arrives—true superintelligence—forever changing or ending human dominance.

Mollick himself favors no prophecy. Instead, he advocates active stewardship: democratic alignment, open experimentation, ethical vigilance, and a spirit of curiosity. Humanity’s challenge isn’t to outthink the machines but to co-evolve with them.

“AI will not decide what kind of world we live in—we will.”

That, Mollick concludes, is the essence of co-intelligence: not surrender to alien minds, but partnership that enlarges our own. The future is already writing with us; we just have to decide what story it tells.
