Power And Prediction

by Ajay Agrawal, Joshua Gans & Avi Goldfarb

Explore how AI's advances in prediction revolutionize decision-making in 'Power and Prediction.' This insightful book illustrates the dynamic partnership between AI and human judgment, shaping industries and empowering informed, strategic decisions.

Power and Prediction: How AI Reshapes Decisions and Systems

What if artificial intelligence could revolutionize your business as profoundly as electricity did for the modern world—but it has not yet reached its full potential? In Power and Prediction, Ajay Agrawal, Joshua Gans, and Avi Goldfarb make the bold argument that we are living in what they call The Between Times: the period between AI’s demonstration of capability and its widespread adoption. The authors contend that the current bottleneck isn’t the technology itself—it’s our systems, structures, and decision-making frameworks that haven’t been redesigned to take advantage of AI’s predictive power.

Agrawal, Gans, and Goldfarb, economists at the University of Toronto and authors of the earlier book Prediction Machines, extend their framework to explain not just how AI lowers the cost of prediction but also how that shift transforms entire systems. Their core claim is simple yet potent: AI is prediction technology. But the true economic and organizational transformation will happen only when we redesign systems to integrate AI’s predictive capabilities at scale. Like the transition from steam to electricity, AI will create initial small benefits through point solutions—but the real productivity gains come when new systems arise around those predictions.

From Hype to Systems

Many organizations see AI as a tool for isolated improvements—better recommendations, fraud detection, or logistics optimization. These are what the authors call “point solutions.” They deliver incremental benefits but do not rewire how decisions are made. The book argues that lasting transformation comes from system solutions, which reshape interconnected decisions, workflows, and business models. This requires thinking beyond replacing human tasks to reconsider the entire architecture of how decisions are produced, coordinated, and scaled.

To help us see this clearly, the authors tell the story of Verafin, a Canadian company that became the nation’s first AI unicorn—not from trendy cities like Toronto or Montreal, but from St. John’s, Newfoundland. Verafin succeeded not because of cutting-edge research, but because its AI fit smoothly into banks’ existing systems for fraud detection, where prediction was already crucial. Yet many other high-potential sectors, such as radiology or manufacturing, struggle because their legacy systems cannot easily integrate automated prediction. This contrast sets up the book’s central question: What must change for AI to realize its promised power?

The Between Times

We’ve seen this gap before. Electricity was invented in the late 19th century but took decades to transform industry. Initially, entrepreneurs replaced steam engines with electric motors—small gains. True disruption came when factories were redesigned around the decentralized power source of electricity, enabling assembly lines and new workflows. The same pattern is unfolding with AI. Early adopters are substituting old analytics with machine learning, but few have redesigned their systems—organizational, regulatory, and infrastructural—to fully leverage prediction. The authors call this historical phase “The Between Times” and argue that we are just beginning the long evolution toward system-level adoption.

AI as Prediction and Judgment

AI’s essence, the authors insist, is prediction—the conversion of information you have into information you need. When predictions become cheap, organizations make more of them. However, prediction alone isn’t a decision. Decision-making also requires judgment: the weighting of outcomes, values, and consequences. This distinction underpins one of the book’s most insightful ideas—the decoupling of prediction and judgment. When prediction passes from humans to machines, the locus of judgment can shift—to different people, teams, or even centralized committees. This shift reconfigures power dynamics inside organizations and industries.

A simple example: in banking, machine learning predicts fraud, but executives must decide what threshold defines “too risky.” In radiology, AI can spot probable tumors, but human doctors still decide whether to treat. In both cases, who controls judgment determines power. As prediction grows cheaper and faster, judgment—not prediction—becomes the scarce resource. This creates new roles, responsibilities, and struggles over authority.
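The split between prediction and judgment can be made concrete. In this hypothetical sketch (the costs, numbers, and function names are illustrative, not from the book), the machine supplies only a fraud probability; the executives' judgment about the relative cost of each error sets the threshold, so the same prediction can yield different decisions:

```python
def fraud_threshold(cost_false_alarm: float, cost_missed_fraud: float) -> float:
    """Block a transaction when expected fraud loss exceeds expected
    false-alarm cost: p * cost_missed_fraud > (1 - p) * cost_false_alarm.
    Solving for p gives the threshold below."""
    return cost_false_alarm / (cost_false_alarm + cost_missed_fraud)

def decide(p_fraud: float, cost_false_alarm: float, cost_missed_fraud: float) -> str:
    """Combine the machine's prediction (p_fraud) with human judgment
    (the two cost parameters) into a decision."""
    return "block" if p_fraud > fraud_threshold(cost_false_alarm, cost_missed_fraud) else "approve"

# Same prediction, different judgment, different decision:
print(decide(0.10, cost_false_alarm=5, cost_missed_fraud=500))   # block
print(decide(0.10, cost_false_alarm=50, cost_missed_fraud=100))  # approve
```

Whoever sets those cost parameters holds the judgment, and with it the power the authors describe.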

Power, Resistance, and Disruption

The authors apply economic lenses to explore how AI reshapes power—within companies, across industries, and among individuals. They dissect disruptions from past technological revolutions to show that incumbents often resist system-level change that threatens their roles. Blockbuster resisted Netflix, just as hospitals may resist diagnostic AI or public offices may resist data-driven decision-making. Organizational “glue”—rules, procedures, and habits—can hold systems together so tightly that change becomes nearly impossible until outsiders demonstrate new models.

To thrive in the AI era, you must learn to think in systems. Decisions don't exist in isolation—they interact, depend on one another, and cascade. The authors teach business leaders to map these dependencies with tools like the AI Systems Discovery Canvas, which helps users identify the key decisions, predictions, and tradeoffs in their organization. By envisioning a blank slate, companies can design entirely new systems where prediction enables efficiency, personalization, and innovation.

Why It Matters

At its heart, Power and Prediction is both an economic framework and a call for imagination. AI’s real revolution won’t come from better algorithms but from new organizational forms built around decisions. By understanding prediction, judgment, rules, systems, and power, you can anticipate where disruption will strike, what resistance will arise, and how to design for reliability. The authors argue that if electricity decoupled energy from its source, AI decouples prediction from human judgment—and that decoupling will transform everything from hospitals and classrooms to factories and financial institutions. The question isn’t whether AI will change the world, but whether you’ll recognize the change when it arrives—and whether your system is ready for it.


The Parable of Three Entrepreneurs

To explain how revolutionary technologies like AI spread, Agrawal and his coauthors use a vivid historical analogy: the rise of electricity and three types of entrepreneurs who tried to commercialize it. Their parable illustrates the path we can expect with AI—from early point solutions to complex system-wide redesigns.

Point Solutions: The Plug-and-Play Phase

In the late 19th century, steam powered factories, driving the Industrial Revolution. When electricity first appeared, most innovators simply swapped steam for electric motors at single points of use. These were point solutions: a textile mill or elevator powered by electricity rather than steam. The advantage was modest—cleaner, cheaper, perhaps more efficient. But the rest of the factory system remained the same. These entrepreneurs didn't need to redesign workflows or spatial layouts, just replace one component. Electric motors were like Verafin's AI for banking—slotting neatly into a process where prediction was already central.

Application Solutions: Redesigning the Device

Next came the application solution entrepreneurs who realized that electric power enabled new types of machines entirely—portable tools and individual motors. This had deeper implications for factory design but still didn’t rebuild the overall system. It made the work more modular and flexible, much like today’s AI-powered products that improve user experiences without changing the organizational structure (think self-driving cars or personalized smartphone apps).

System Solutions: Reinventing Everything

The true transformation came when entrepreneurs imagined what a factory should look like if designed from scratch with electricity. No longer constrained by central shafts and belts, they could build flat, single-level factories with optimized flows, lighting, and safety. Henry Ford’s assembly line represented this new system solution. Electrification finally boosted productivity, changed cities, and shifted economic power to those controlling grids and mass production. The parallels with AI are striking: initial point solutions (better fraud detection), intermediate applications (smart cars and apps), and future system-level redesigns (new industries built around machine decision-making).

AI’s Three Waves

The authors transpose this parable onto AI adoption. The first wave consists of point solution entrepreneurs who swap old prediction tools for AI, like Verafin or recommendation algorithms. The second wave builds applications, reimagining products with embedded AI—autonomous vehicles, smart assistants, or adaptive robots. The final wave includes system solution innovators who reconstruct entire industries around machine prediction—new healthcare networks, personalized education systems, or AI-centric supply chains.

Key takeaway:

Technological revolutions don’t transform societies by replacing old parts but by redesigning systems around new capabilities. AI won’t just lower the cost of prediction—it will enable the creation of vastly more productive organizational designs.


AI’s System Future

Agrawal, Gans, and Goldfarb argue that just as electricity’s true potential emerged only when its system-level benefits were understood, AI will also require the reengineering of entire organizations and industries. They call our current period “The Between Times”—a paradoxical moment of dazzling AI advancements but low productivity returns.

From Paradox to Pattern

Economist Robert Solow once quipped, "You can see the computer age everywhere but in the productivity statistics." The same paradox now applies to AI. Despite impressive breakthroughs—companies deploying chatbots, image recognition, and predictive analytics—global productivity growth has slowed. According to the authors, this isn't a failure of the technology but of systems: a small fraction of firms derive major benefits because their workflows already align with prediction-based decisions. Most others can't—yet.

Three Types of AI Solutions

The authors formalize three tiers of AI deployment:

  • Point solutions: Improving an existing decision (e.g., better fraud detection).
  • Application solutions: Enabling a new decision without changing the whole system (e.g., recommending products).
  • System solutions: Requiring changes to dependent decisions—industry-level transformations (e.g., reconfigured supply chains or healthcare systems).

The heavy lifting happens with system solutions, which are economically dependent—not viable unless multiple processes change together. Just as factories couldn’t electrify without redesigned layouts and grids, AI systems need interconnected decisions to evolve together.

Disruption Through System Change

System change is inherently disruptive. When agriculture adopted predictive weather modeling, it centralized farm management. Similarly, as prediction moves across industries, power shifts—some roles diminish, others emerge. The authors urge readers to anticipate this disruption rather than resist it. Entrepreneurs and leaders who design systems capable of adaptation will define the AI revolution, much like Henry Ford defined the automotive one.


AI Is Prediction Technology

At the heart of Power and Prediction lies the assertion that AI is simply—and profoundly—a prediction machine. It converts information you have into estimates of outcomes you don't yet know, reshaping decision-making at every level.

Prediction, Judgment, and Data

The authors distinguish prediction from other decision inputs. Prediction estimates the likelihood of outcomes; judgment values those outcomes; and data informs both. In every decision—from detecting fraud to diagnosing disease—these three components interact. AI lowers the cost of prediction, causing organizations to make more decisions that rely on it, while increasing the value of judgment and data as complements.

Correlation vs. Causation

The authors warn that many misuse AI's predictions by confusing correlation with causation. For instance, toy sales correlate with advertising—but the true driver is Christmas demand. Prediction machines spot patterns, not causes. To truly transform industries, leaders must pair AI with causal inference—the statistical science that uncovers "what happens if we change X." (Nobel laureates Guido Imbens and David Card helped advance these methods, which companies such as Amazon now hire economists to apply.)
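A small simulation makes the trap concrete. The data below are synthetic and the scenario stylized: holiday weeks drive both advertising and toy sales, so the two correlate strongly even though advertising has no causal effect on sales in this model. Conditioning on the confounder makes the relationship vanish:

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(0)
ads, sales, december = [], [], []
for week in range(520):                      # ten years of weekly data
    is_december = week % 52 >= 48            # the hidden common cause
    ads.append(10 + 20 * is_december + random.gauss(0, 1))
    sales.append(50 + 40 * is_december + random.gauss(0, 3))
    december.append(is_december)

# Naive view: advertising strongly "predicts" sales...
print(round(pearson(ads, sales), 2))

# ...but within ordinary weeks, where the holiday can't confound,
# the relationship disappears: ads never caused sales here.
ordinary = [i for i, d in enumerate(december) if not d]
print(round(pearson([ads[i] for i in ordinary],
                    [sales[i] for i in ordinary]), 2))
```

A prediction machine would happily use advertising to forecast sales; a leader deciding whether to cut the ad budget needs the causal answer, which the raw correlation cannot give.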

Case Studies

Through vivid examples—Verafin’s fraud detection, Amazon’s recommendation engine, and autonomous driving—the book shows that successful prediction integration depends on system design. When Amazon’s predictive accuracy became high enough to consider shipping before orders (“ship-then-shop”), the limiting factor wasn’t prediction—it was logistics, specifically returns. Without system change, even superior AI remained impractical. The lesson: the hardest part of AI adoption isn’t computing—it’s coordination.


Decision-Making: Rules Versus Choices

Herbert Simon once said humans “satisfice”—we settle for good enough decisions rather than perfect ones. The authors use this behavioral insight to explain why AI’s low-cost predictions push organizations from simple rules toward dynamic decision-making.

Why We Prefer Rules

Rules simplify life. Barack Obama wore only blue or gray suits to avoid wasting decision energy; Steve Jobs bought identical black turtlenecks; organizations rely on standard operating procedures. Rules lower cognitive costs and increase reliability. But they also hide uncertainty. When AI makes prediction cheap, the balance shifts—making a fresh decision becomes worthwhile again because the relevant information is abundant.

When AI Breaks Rules

Introducing AI into rule-bound environments can reduce reliability. A prediction may outperform a rule locally but destabilize a system globally. For example, an AI that forecasts weekly demand might disrupt suppliers who rely on steady orders—the "AI bullwhip" effect. The takeaway? Great predictions don't automatically produce great systems; the surrounding system must be redesigned to accommodate the new variability.
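A minimal sketch of the effect described above, using synthetic demand data (the numbers are invented for illustration): a fixed-order rule shields the supplier from demand volatility, while an AI that passes its forecast straight through pushes the full variability upstream:

```python
import random
import statistics

random.seed(1)
# Synthetic weekly demand: mean 100 units, standard deviation 15.
demand = [100 + random.gauss(0, 15) for _ in range(52)]

# Old rule: order a constant quantity every week (reliable for the supplier).
rule_orders = [100.0] * 52

# AI: order whatever the (here, perfect) weekly demand forecast says.
ai_orders = list(demand)

print(statistics.stdev(rule_orders))  # 0.0: the supplier sees no variation
print(statistics.stdev(ai_orders))    # roughly the full demand volatility
```

Locally, the AI orders are more accurate every single week; globally, a supplier built around steady orders now faces swings it was never designed to absorb, which is exactly why the system, not just the prediction, has to change.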

In short:

AI shifts the equilibrium between rules and decisions, liberating action from routine but demanding new system architectures to manage reliability and coordination.


Hidden Uncertainty and Organizational Blind Spots

The authors illustrate how uncertainty gets hidden—or institutionalized—in systems, using airports as a metaphor. We arrive early for flights not because we love airports, but because we can’t predict traffic or security delays. The result? Airports turn waiting into a business, building shops and waterfalls to monetize latency. This “hidden uncertainty” is precisely where AI can add value—and where resistance will be strongest.

Eliminating Hidden Costs

AI tools that predict traffic, wait times, or scheduling could eliminate much of the “airport buffer zone.” Yet airports might resist—uncertainty keeps travelers captive to retail spending. Similarly, industries that profit from inefficiency (think health administration, legal filing systems, or traditional insurance) may resist AI that exposes their hidden costs.

Guardrails and Hedgerows

Organizations often build "hedges"—physical, procedural, or cultural guardrails—to manage risk. British farms planted hedgerows to contain animals and soil, but those same hedgerows later impeded mechanization. Over time, such protections become overengineered solutions that obscure true risk. AI's predictive capacity can reveal this hidden uncertainty, allowing systems to be redesigned around real probabilities rather than perceived safety nets.

By identifying and quantifying uncertainty, AI provides an opportunity to reveal inefficiencies baked into rules—but replacing those structures requires courage to rethink entire workflows.


Disruption, Power, and Resistance

Every new general-purpose technology reshapes power structures. Agrawal, Gans, and Goldfarb explore how AI changes who holds influence within organizations and industries, emphasizing that power moves when judgment moves.

Economic Power Defined

Power arises from scarcity. Those controlling scarce resources—capital, data, or judgment—gain economic advantage. AI shifts where scarcity lies: as prediction becomes cheap, judgment becomes the scarce input that confers competitive advantage. Industries where data centralizes, like search or insurance, concentrate power; industries with distributed judgment, like teaching or caregiving, decentralize it.

Resistance from the Old Guard

Incumbents often block system-level innovations that threaten their established hierarchies. Blockbuster Video's franchise owners resisted Netflix-style subscriptions because the model eliminated late fees, a major source of their revenue. Hospitals and universities may resist predictive AI that flattens hierarchies or exposes inefficiencies. The authors describe this as "glue"—rules and incentives binding complex organizations. New entrants with "blank-slate" systems are freer to innovate.

The Mechanism of Disruption

Disruption occurs when AI requires architectural change that incumbents can’t or won’t make. Once a challenger builds a new system optimized for prediction-driven decisions, it gains defensible advantages. The race to build such systems defines modern competition, and understanding how power redistributes inside those systems determines who wins in the age of AI.


The AI Systems Discovery Canvas: Reimagining Organizations

To help leaders move from rules to systems, the authors introduce the AI Systems Discovery Canvas, a practical tool for reimagining industries from a blank slate. The canvas asks you to identify your organization’s mission, its minimal set of decisions, and the predictions and judgments that drive them.

Building from First Principles

The exercise begins by stripping away legacy rules to clarify the fundamental mission—say, “providing peace of mind against catastrophic loss” for insurance. From there, categorize the few essential decisions (marketing, underwriting, claims), the predictions they require (risk, customer value, fraud probability), and the errors that could occur. This method surfaces opportunities where AI can improve decisions and exposes interdependencies that require system redesign.
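One way to make the canvas operational is to encode it as a simple data structure. The sketch below is a hypothetical rendering, not the authors' official template; the field names and insurance entries are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One essential decision on the canvas."""
    name: str
    predictions_needed: list[str]
    judgment: str               # who weighs the outcomes, and how
    costly_errors: list[str]    # the mistakes worth worrying about

@dataclass
class SystemsCanvas:
    """Blank-slate view of an organization: mission plus minimal decisions."""
    mission: str
    decisions: list[Decision] = field(default_factory=list)

canvas = SystemsCanvas(
    mission="Provide peace of mind against catastrophic loss",
    decisions=[
        Decision("Underwriting", ["risk of loss"], "pricing committee",
                 ["insuring bad risks", "rejecting good customers"]),
        Decision("Claims", ["fraud probability"], "claims adjuster",
                 ["paying fraudulent claims", "denying valid ones"]),
    ],
)

# Surface every prediction the mission ultimately depends on:
print(sorted({p for d in canvas.decisions for p in d.predictions_needed}))
```

Listing predictions and costly errors side by side makes the interdependencies visible: any decision whose errors ripple into another decision's inputs is a candidate for system-level redesign rather than a point solution.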

Case Example: Insurance Reinvention

Insurance firms historically transfer risk; with AI, they can help mitigate it. Predicting sub-perils—like electrical fires or leaky pipes—allows insurers to share actionable insights with clients, reducing risk before claims occur. This system-level shift changes incentives, aligning insurer and customer interests. Yet it threatens agent commissions and entrenched revenue models, demonstrating why transformation often meets internal resistance despite clear logical benefits.

Design Thinking for AI

The canvas invites leaders to conduct blank-slate analyses of any sector—healthcare, logistics, education—to uncover how prediction could rewire decisions. It’s not about inserting AI into the old system, but designing a new one optimized around better prediction, smarter judgment, and reduced friction between information and action.


AI Bias and Systems Thinking

In a powerful final discussion, the authors challenge the dominant narrative that AI perpetuates discrimination. They argue that while algorithms can inherit human biases, they also present the best opportunity to detect and correct them—precisely because software is scrutable and updatable, unlike human judgment.

Detecting Bias

Economist Sendhil Mullainathan's research exemplifies this. His studies of racial bias in hiring and in medical AI show that algorithmic bias is easier to quantify and fix than human prejudice. In medicine, for example, diagnostic standards developed largely on white patients missed key pain markers in others; algorithms identified the overlooked patterns, significantly improving fairness in treatment prediction.
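Part of why algorithmic bias is easier to quantify is that an algorithm's decisions can be logged and replayed. A minimal sketch of such an audit (the log and field names are invented for illustration) compares false-negative rates across groups, the kind of measurable gap the studies above surfaced:

```python
from collections import defaultdict

def false_negative_rates(records):
    """Share of truly positive cases the model missed, per group.
    Each record is (group, model_said_yes, condition_actually_present).
    A gap between groups is a measurable, fixable signal of bias."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            positives[group] += 1
            if not predicted:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

# Toy audit log: (group, model said yes?, condition actually present?)
log = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", True, False),
]
print(false_negative_rates(log))  # group B's positives are missed twice as often
```

Running the same audit on a human decision-maker would require reconstructing every past judgment; with software, the record is already there.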

Fixing Systems, Not Just Code

Bias stems from systems as much as data. When Amazon built a recruiting algorithm, it echoed hiring biases from human history. The failure wasn’t in the math—it was in training data drawn from discriminatory practices. Correcting bias therefore means redesigning the underlying systems, not just patching algorithms. Regulators must shift from outcome-based quotas to treatment-based fairness—ensuring equal processes rather than equal results.

From Black Box to Transparent Machine

Agrawal, Gans, and Goldfarb close on a hopeful note. Future AIs can standardize fairness, create audit trails, and reduce human inconsistency. Like automated speed enforcement, they may face backlash from those who benefit from discretion, but transparent systems will ultimately create more equity. In other words, once you adopt a system mindset, bias becomes not a barrier but an opportunity for design—a chance to make better predictions and better societies.
