The Leader's Guide to Managing Risk

by K. Scott Griffith

The Leader’s Guide to Managing Risk by K Scott Griffith offers a groundbreaking approach to navigating the unpredictable threats facing businesses. By integrating resilience into organizational culture and focusing on predictive reliability, this book provides leaders with the tools to manage risks effectively, ensuring sustainable and reliable operations across industries.

Seeing Risk Before It Happens: The Hidden Science of Reliability

How can you prevent disasters—personal, professional, or societal—before they occur? In The Leader’s Guide to Managing Risk, former airline executive and physicist K. Scott Griffith argues that reliability, not luck or intuition, is the cornerstone of safety and sustained performance. He contends that most organizations—and individuals—focus only on what they do well, ignoring the invisible threats lurking beneath their successes. The result: predictable failures that feel unpredictable.

To change this, Griffith introduces what he calls the Sequence of Reliability®, a scientifically grounded approach that shows you how to see, understand, and manage the hidden patterns of risk in systems, people, and organizations. It’s the culmination of decades of work across aviation, healthcare, government, and other high-consequence industries, inspired by one moment that changed his life—the 1985 Delta Flight 191 crash he witnessed firsthand. That catastrophe became a lens through which Griffith saw risk differently: not as random chaos, but as a sequence governed by physics, psychology, and human behavior.

Beyond Success Bias: Why Good Results Mislead Us

Most businesses and individuals celebrate outcomes—the project launch that worked, the flight that landed safely, or the surgery that succeeded—without examining what could have gone wrong. Griffith calls this our collective “blind spot.” He argues that successful outcomes often conceal vulnerabilities that would lead to catastrophic results under slightly different circumstances. As he puts it, “Our risky systems and behaviors produce dividends—until they don’t.” By overvaluing success and ignoring dormant risks, we misunderstand the very processes that create reliability.

In practical terms, this means that a company’s strong quarter could hide systemic weaknesses—a flawed IT system, an exhausted workforce, or untested protocols—that will surface when conditions change. To become truly reliable, Griffith says leaders must flip their focus: devote as much attention to what might fail as to what succeeds. The book’s message isn’t about fear; it’s about foresight.

The Sequence of Reliability®: A Scientific Order for Managing Risk

The book’s framework unfolds through a simple but powerful sequence:

  • Step 1: See and Understand Risk — Develop the vision to perceive what’s invisible. Learn how blind spots, optimism bias, and cultural filters obstruct our ability to see danger.
  • Step 2: Manage Reliability in Order — Address risk through layers: systems, human behavior, and organizations—exactly in that sequence. Focusing on people without fixing broken systems, Griffith warns, is like blaming the pilot for flying through a storm they couldn’t see.

This sequence governs every level of reliability, from the personal habits that build resilience to organizational strategies that prevent billion-dollar failures. Griffith calls reliability a hidden science because it integrates engineering, neuroscience, behavioral psychology, and even ethics. In his view, reliability is about probability—a mathematical reality of how small, everyday vulnerabilities accumulate into disasters if left unacknowledged.
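Griffith's point about accumulation can be made concrete with a small probability sketch (our illustration, not from the book). If each of n independent, everyday vulnerabilities has only a small chance p of firing on any given exposure, the chance that at least one fires grows quickly with n:

```python
# Illustrative sketch (not Griffith's data): how small independent risks accumulate.
# P(at least one failure) = 1 - (1 - p)**n for n independent exposures of probability p.

def prob_at_least_one_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent risks fires."""
    return 1 - (1 - p) ** n

# A 1% vulnerability feels negligible on a single exposure...
print(prob_at_least_one_failure(0.01, 1))
# ...but repeated across 250 working days it becomes near-certain trouble (~92%).
print(prob_at_least_one_failure(0.01, 250))
```

This is the arithmetic behind "our risky systems produce dividends—until they don't": each safe repetition leaves the per-exposure risk unchanged while the cumulative odds quietly climb.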

Why It Matters: From Cockpits to Boardrooms

Griffith makes the science accessible through vivid examples: the microburst that brought down Delta Flight 191; NASA’s space shuttle Challenger and Columbia disasters; and healthcare errors like wrong-site surgeries or medication mistakes. Across these worlds, he finds a consistent pattern—organizations assume that the absence of accidents means safety, until probability proves otherwise. “Being accident-free,” he warns, “does not guarantee future success.”

In the same way that aviation evolved from reactive accident investigations to predictive analytics, Griffith urges leaders to transform their organizations into predictively reliable systems. This shift requires collaboration—what he calls Collaborative High Reliability® and Collaborative Just Culture®. These are frameworks for fairness, inclusion, and data-driven improvements that treat errors as learning opportunities rather than grounds for punishment.

From Catastrophe to Collaboration: A Better Way Forward

Griffith’s career bridges science and management. As American Airlines’ chief safety officer, he helped develop the Aviation Safety Action Program (ASAP), a collaborative reporting initiative between airlines, unions, and the FAA that led to a 95 percent reduction in the airline fatal accident rate. His model turned punishment-based regulation into partnership-driven foresight. Later applied to healthcare, energy, and law enforcement, the same approach taught organizations to surface risks before harm occurred.

“Our risky systems produce dividends—until they don’t.”

Griffith reminds you that being good at what you do isn’t enough; you must also be good at what you don’t do well. Reliability isn’t perfection—it’s the capacity to anticipate, adapt, and recover when things inevitably go wrong.

The Promise of Predictive Reliability

Ultimately, Griffith redefines what it means to lead. A great leader isn’t merely visionary; they’re reliable. They build organizations that sustain success over decades, not just quarters. Predictive reliability—the ability to foresee and manage risk before catastrophe—helps companies prevent employee burnout, ethical breakdowns, and systemic collapse. It’s a model for modern leadership rooted in science, empathy, and collaboration.

If you’ve ever wondered why “accidents seem to happen out of nowhere,” Griffith’s answer is clear: they don’t. The patterns are there to see, if you know how to look. This book teaches you to see them, measure them, and manage them—in your business, your community, and your life.


The Sequence of Reliability® Framework

Griffith’s signature contribution—the Sequence of Reliability®—is both elegant and rigorous. It’s a stepwise system for managing risk that mirrors the scientific method: observe, analyze, test, and refine. What’s revolutionary is the order in which he insists leaders must act: systems first, humans second, organizations third. Getting this sequence wrong, he argues, is what keeps most companies stuck in cycles of failure and reaction.

Step 1: Seeing and Understanding Risk

This begins with awareness—not just of visible hazards, but of the invisible ones. Griffith uses the Iceberg Model to describe how organizations usually see only the “tip”—accidents, data breaches, product failures—while the far larger mass of potential risk lies submerged. He draws examples from Facebook’s privacy scandals and Apple’s supply chain challenges to show how even global enterprises mismanage what they can’t see. Your task as a leader is to develop risk intelligence: the ability to perceive likelihood, severity, and consequence before failure happens.

Step 2: Managing Systems

Once you understand risk, you start with systems—the physical, procedural, and digital environments that shape behavior. Griffith explains system reliability through engineering concepts like barriers (preventing failures), redundancies (creating backups), and recoveries (responding when things go wrong). In aviation, redundant hydraulics and one-way fuel valves keep planes safe. In hospitals, redesigning alarms can prevent deaths. System reliability is the scaffolding for everything else; without it, people will fail even if they try their best.

Step 3: Managing Humans

Human reliability means recognizing that people are fallible—and designing systems that anticipate those fallibilities. Griffith distinguishes between human error (inadvertent mistakes) and at-risk choices (intentional but misjudged behaviors, like speeding). While human errors are best mitigated through system design, at-risk choices require coaching, feedback, and culture change. His stories—from Steve Irwin’s overconfidence to hospital nurses distracted mid-surgery—illustrate how good intentions still produce harm when risk isn’t understood.

Step 4: Managing Organizations

At the organizational level, reliability becomes cultural. Griffith urges leaders to balance competing values—such as safety, cost, and diversity—while keeping systems and people aligned. He compares companies like Uber, which lost trust through cultural failure, to NASA, whose space shuttle disasters stemmed from overlooking social and organizational influences. The key is to manage perceptions of risk across departments and hierarchies. An organization is only as reliable as its ability to see its own blind spots.

Step 5: Predicting and Preventing Future Failures

Predictive reliability—the pinnacle of the sequence—combines data, probability modeling, and human judgment. Griffith introduces the concept of fault trees, mathematical models that show how multiple small errors combine into catastrophic events. This approach, adapted from nuclear engineering, helps leaders design systems that are resilient by expectation, not reaction. The more you apply this sequence, the more you transform your organization from accident-prone to anticipatory.

Applied correctly, the Sequence of Reliability® builds a bridge between prevention and performance. It’s not just about avoiding accidents—it’s about creating sustainable excellence. As W. Edwards Deming (the quality management pioneer whom Griffith cites frequently) demonstrated, systems thinking transforms cost, quality, and human engagement all at once. Griffith extends this logic into modern complexity: risk management becomes the foundation of long-term reliability, resilience, and trust.


Human Reliability: The Science of Fallible Choices

Griffith’s exploration of human reliability begins with a timeless truth: humans make mistakes. But he insists the real danger isn’t error—it’s choice. The decisions we make without recognizing risk are the seeds of catastrophe. His analysis of human fallibility blends cognitive science, behavioral psychology, and real-world examples, turning abstract risk into deeply human stories.

Human Error vs. At-Risk Choice

Griffith distinguishes between inadvertent mistakes (like misreading a signal) and intentional but misguided choices (like texting while driving). A simple slip can be prevented with system design; at-risk choices require understanding motivation, perception, and context. For instance, the Metrolink train collision in Los Angeles happened because an engineer was texting—a choice repeated safely many times before until conditions aligned for disaster. The takeaway: repetition reinforces false safety. Until failure occurs, choice feels justified.

Thinking, Fast and Slow

Drawing from Daniel Kahneman’s famous “System 1 and System 2” model, Griffith explains how our brains toggle between instinctive and deliberate thinking. We rely on System 1 for routine tasks—driving, typing, multitasking—but that autopilot mode creates blind spots. Reliability improves when we consciously switch to System 2, slowing down when risk increases. In practice, this might mean pausing before hitting “send,” double-checking a medication, or deciding to stop at a yellow light instead of racing through. (He likens this to engineers using deliberate testing before relying on automation.)

Managing Behavior Through Design and Culture

You can’t train humans to perfection. Instead, Griffith urges designing systems that compensate for their limits. Barriers, redundancies, and recoveries—like two-factor authentication or secondary medical verification—reduce error probability. Culture reinforces these safeguards. Peer-observation research (the “watching eyes” studies) shows that people follow rules more reliably when they know they’re being seen. The challenge is creating cultures of transparency without fear, where coaching replaces punishment. In his Collaborative Just Culture® model, fairness and learning replace blame, allowing people to report mistakes safely.

From Punishment to Collaboration

Historically, organizations responded to failure by punishing individuals—an approach Griffith calls “tombstone mentality.” Aviation and healthcare learned that scolding doesn’t fix systems. His alternative, inspired by the Model Penal Code’s hierarchy of culpability, categorizes behavior: human error deserves support, at-risk choices require coaching, and reckless choices merit discipline. Justice, he argues, should serve reliability, not retribution. When people trust that honesty won’t cost their career, organizations finally see risk before injury.

Human reliability, Griffith concludes, emerges when leaders blend engineering with empathy. You can’t stop all human mistakes, but you can design environments where people recover quickly, learn deeply, and act wisely. Risk can’t be eliminated—but it can be humanized.


System Reliability: Building Resilient Structures

Griffith regards system reliability as the foundation of all stability—everything from airplanes and hospitals to families and schools. Systems fail not because they’re inherently bad but because they’re designed without accounting for degradation, capacity, or external change. Understanding these influences lets you fortify systems before they collapse.

Design Determines Destiny

Every system starts with design specifications—its intended purpose, limits, and constraints. A car’s tires, a surgical checklist, or a data firewall each embodies design trade-offs. Griffith reminds you that “you will get no better results than the limits of your system design.” Pilots can’t fly a plane with faulty hydraulics, and managers can’t succeed with broken processes. To become effective and resilient, systems must balance functionality with failure anticipation.

Barriers, Redundancies, Recoveries

The triad of reliability design appears again here. Barriers reduce risk (like speed limits or safety gates); redundancies provide backups (dual engines, multi-layer firewalls); recoveries repair damage (reboot protocols, emergency overrides). Griffith’s hospital story—where a simple software timer could have rebooted a cardiac monitor—shows how small design tweaks save lives. Similarly, one-way valves on gas pumps prevent explosions when drivers pull away with the hose still attached. These examples remind you that good systems protect people from themselves.
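The case for redundancy is ultimately arithmetic, and a back-of-the-envelope sketch (our illustration, with made-up probabilities) shows why: independent layers fail together only when every layer fails at once, so their failure probabilities multiply.

```python
# Illustrative sketch (hypothetical numbers): failure probability of
# independent redundant layers. The system fails only if ALL layers fail,
# so P(system fails) = product of the individual layer probabilities.

from math import prod

def redundant_failure_prob(layer_probs):
    """System failure probability for independent parallel safeguards."""
    return prod(layer_probs)

# One barrier that fails 1 time in 100:
single = redundant_failure_prob([0.01])
# Add an independent backup that also fails 1 in 100:
dual = redundant_failure_prob([0.01, 0.01])
print(single, dual)  # roughly 0.01 vs 0.0001
```

Two modest, independent safeguards outperform one very strong one, which is why aviation pairs redundant hydraulics with recovery procedures rather than betting on a single flawless component. The multiplication only holds if the layers fail independently; safeguards that share a common cause (the same power supply, the same tired crew) do not compound this way.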

Managing Degradation and Load

Systems naturally deteriorate—hardware wears down, software becomes outdated, policies lose relevance. This degradation, coupled with excessive load or resource mismatch, undermines reliability. Griffith’s lessons apply everywhere: overloaded classrooms hurt learning, overburdened grids cause power crises, and outdated protocols fail under pressure. Preventive maintenance, updates, and adaptability keep systems effective over time. The cost of failing to upgrade is almost always higher than the cost of prevention.

System reliability doesn’t mean perfection. It means building in anticipation—accepting that failure will happen but designing for quick recovery. As Griffith puts it, “We may not predict when people will err, but we can predict how and where they’ll fail.”


Predictive Reliability: Seeing the Future of Risk

Preventing what hasn’t happened yet sounds impossible—until Griffith shows you how predictive reliability works. This concept, rooted in probabilistic risk modeling, brings science to foresight. It’s about recognizing patterns, probabilities, and sequences before catastrophe strikes, transforming trial and error into anticipation and design.

From Investigations to Predictions

Most organizations start managing risk through accident investigations, audits, and compliance checks. Griffith argues these are necessary but insufficient—they’re reactive snapshots. To move forward, you need predictive modeling that identifies interconnected contributors to failure. His back injury example illustrates the method: tracing pain not just to one cause but to ten—posture, distraction, deadlines, shoes, and system load—all combining probabilistically into harm. Mapping these links creates a “fault tree” from which you can see which nodes to fix first.
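A fault tree of this kind can be evaluated with two basic gates. The sketch below is our illustration with invented probabilities, not Griffith's data: an OR gate fires if any contributor occurs, an AND gate only if all of them do.

```python
# Illustrative fault-tree sketch with made-up probabilities (not from the book).
# Inputs are assumed independent.

def or_gate(probs):
    """P(at least one input event occurs) = 1 - P(none occur)."""
    p_none = 1.0
    for q in probs:
        p_none *= (1 - q)
    return 1 - p_none

def and_gate(probs):
    """P(all input events occur) = product of the inputs."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Hypothetical contributors echoing the back-injury example: poor posture OR
# distraction creates an unsafe setup; injury needs that setup AND a heavy load.
unsafe_setup = or_gate([0.10, 0.05])      # posture, distraction
injury = and_gate([unsafe_setup, 0.20])   # unsafe setup AND heavy load
print(round(injury, 4))
```

Reading the tree this way shows which node to fix first: halving the heavy-load probability cuts the top event's probability in half, while halving one OR-branch input helps far less, because the other branch still feeds the gate.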

Applying Probabilistic Thinking

Predictive reliability borrows from physics and engineering. If systems are networks of probabilities, leaders can plan interventions at the most influential junctions. In rail transport, implementing positive train control (a sensor system preventing collisions) dramatically improved safety because it addressed multiple fault paths at once. Griffith’s takeaway: strong system-based solutions outperform weak behavioral fixes. It’s easier to design technology that stops texting while driving than to ensure 222 million drivers change habits.

Designing for Capture Opportunities

Fault-tree analysis reveals “capture opportunities”—places where intervention interrupts escalation toward disaster. These could be technical (automatic shutoffs) or procedural (cross-check policies). Recognizing them transforms risk management from policing mistakes to engineering reliability. Over time, data visualization, machine learning, and collaborative reporting systems (like Griffith’s ASAP) refine these predictions further, creating organizations that adapt faster than risk evolves.

Predictive reliability turns hindsight into foresight. By quantifying probability and designing systems that anticipate failure, leaders move beyond prevention—they achieve sustainable reliability over time. For Griffith, that’s the future of leadership: managing the unseen through science and collaboration.


Collaborative Just Culture®: Fairness as a System Design

One of Griffith’s most actionable ideas is the Collaborative Just Culture® model—an organizational framework where fairness and reliability coexist. He argues that most workplaces fail not because employees make mistakes, but because they’re afraid to admit them. CJC replaces fear with structured collaboration, turning justice into a preventive system.

From Punishment to Participation

Traditional HR responses focus on discipline and compliance. Griffith reframes this as inefficiency: punishment silences insight. A collaborative model encourages employees to report risk confidentially, safely, and consistently. Stories from aviation’s ASAP and healthcare’s reporting systems prove that when workers trust their organizations, data quality—and safety—skyrockets. As he notes, “It’s not that people refuse to report; it’s that they don’t trust what will happen afterward.”

Three-Party Collaboration

Central to CJC is the Triad Process: management, human resources, and safety/risk representatives review incidents together until they reach unanimous consensus, much as juries must reach unanimity to deliver a verdict. This ensures diverse perspectives, checks biases, and builds accountability without hierarchy. The process guarantees that every decision about risk and behavior passes fairness and consistency tests, preventing arbitrary punishment.

Evidence-Based and Auditable

Griffith elevates CJC from philosophy to system. It’s documented, monitored, and externally audited—an innovation confirmed by DNV (Det Norske Veritas), one of the world’s top certification bodies. By requiring written standards, measurable outcomes, and transparency, CJC becomes a reproducible model for justice-driven reliability. Employees see fairness not as rhetoric but as verified practice.

Fairness, data, and design combine to make organizations sustainably reliable. CJC proves that workplace justice isn’t ideological—it’s operational science. When employees feel protected, organizations gain visibility into everyday risks, completing the iceberg’s picture beneath the surface.


Collaborative High Reliability®: The Future of Organizational Excellence

In his final chapters, Griffith introduces Collaborative High Reliability® (CHR), the world’s first independently audited model for organizational resilience. Built upon Just Culture and quality management principles, CHR is a blueprint for sustainable excellence—a way to measure, replicate, and certify reliability across industries.

Building the Reliability Management System

CHR begins with two “big rocks”: a Collaborative Just Culture program and a Reliability Management Team (RMT) of subject matter experts. These form the foundation for a larger Reliability Management System (RMS)—an integrated framework that monitors performance, risk, and improvement across key attributes: safety, customer service, privacy, quality, financial responsibility, operational integrity, and equity. Each attribute is documented, monitored, measured, and continuously improved.

Independent Verification: The DNV Partnership

Recognizing the need for transparency, Griffith partnered with DNV to audit and certify CHR. The audits verify proficiency, sustainability, and predictive performance, distinguishing between Tier 1, Tier 2, and Tier 3 levels. Independent verification introduces what he calls “engineering-grade ethics” into management—ensuring that reliability isn’t self-proclaimed but earned through objective evidence.

From Quality to Reliability

CHR evolves W. Edwards Deming’s mid-century quality principles into twenty-first-century reliability science. Where quality management focused on production consistency, CHR focuses on systemic resilience—performance that endures through change and crisis. Griffith’s message is clear: reliability is the new gold standard for leadership. It’s not about perfection; it’s about creating adaptive systems that keep delivering when things go wrong.

By turning reliability into an auditable, measurable discipline, Collaborative High Reliability® pushes organizations to transform philosophy into practice. It completes Griffith’s vision: seeing risk before it happens, designing systems that prevent it, and building cultures strong enough to sustain it. In doing so, leaders don’t just avoid disaster—they achieve enduring excellence.
