
Map It

by Cathy Moore

Map It is your go-to guide for creating effective business training programs that deliver measurable results. Uncover the root causes of business challenges, set clear goals, and design engaging learning experiences that drive success. Say goodbye to ineffective training and hello to impactful solutions.

From Information to Impact: The Core Shift

What if your job as a learning designer wasn’t to deliver content, but to improve performance? That’s the radical premise Cathy Moore advances in her influential framework, often called Action Mapping. The book dismantles the traditional school model of training — the familiar cycle of telling, then testing — and replaces it with a performance-first mindset. Instead of creating courses that simply transmit knowledge, Moore challenges you to identify what people must do differently on the job to move a measurable business metric.

Why the old model fails

Moore begins with a story of two designers, Tina and Anna. Tina builds conventional slide-based courses, piling quizzes and videos into an attractive package. Her learners pass tests but change nothing about how they work. Anna, by contrast, uses the action-mapping approach. She starts by asking, “What problem are we solving?” and “What must people do differently?” Anna designs realistic decision activities that simulate the job. Through practice and feedback, her learners make better choices — and the organization’s metrics improve.

The problem, Moore argues, is that most workplace learning lives in Testland — a gray world where success means passing recall tests rather than performing under pressure. In the real world, success is measured not by how much learners remember but by how effectively they act amid constraints, distractions, and imperfect information. If the goal is to change behavior, the solution must reflect that reality.

The logic of Action Mapping

Action mapping offers a simple but rigorous workflow. You begin with a measurable business goal — a specific change in performance linked to a metric your organization already tracks. Then, you identify the observable actions people must take to achieve that goal. Next, you diagnose why those actions aren’t happening, considering factors beyond skill or knowledge (like missing tools or conflicting incentives). Only after that do you design the smallest, tightest set of activities that help people practice the right decisions in realistic contexts.

Core principle

“Decide to improve business performance. Focus on what people need to do.” That single shift turns you from a content provider into a performance consultant.

Moving from order-taker to performance partner

Moore addresses a practical challenge: stakeholders often arrive asking for a course because that’s what they know. Instead of rejecting them, you redirect the conversation by asking questions: “What business result are we trying to improve?” “What does good performance look like?” “How would we know it’s working?” By guiding clients through these questions, you uncover root causes that may have nothing to do with training — and often find faster, cheaper fixes such as revising job aids or adjusting workspace layouts. (Harold’s hospital example shows how moving sharps containers, not teaching more rules, reduced needle injuries.)

When you apply this approach, you’re no longer producing content for its own sake. You’re building interventions — which might include practice scenarios, job aids, coaching scripts, workflow tweaks — all tied explicitly to measurable outcomes. This outcome-driven logic reframes your identity. You stop being a slide-maker and start acting as a strategic partner who affects key metrics.

How the book unfolds

Across its chapters, Moore walks you through every phase of the process: writing measurable business goals, defining observable job actions, diagnosing barriers, designing meaningful practice, and producing and evaluating results. You’ll meet recurring archetypes — Anna Action von Mapp, Bob the sales manager, Grace the data processor — who reveal how real problems can be solved faster when we question the assumption that training is always the answer.

You also explore deeper design lessons: how to interview subject‑matter experts for stories and consequences rather than rules; how to create scenario-based activities that show, not tell; how to align timing, format, and reinforcement to real work rhythms; and how to prototype and iterate quickly before committing to production. Each stage builds a habit of discipline — always linking content back to why this matters for the business.

The result, Moore promises, is that your learning projects will move from being educational artifacts to business tools: clear, streamlined, and performance‑centered. This approach doesn’t just improve learning outcomes; it elevates your professional credibility. You become the person who improves the numbers that matter, not just fills the LMS with activity completions.

A practical philosophy

Moore’s philosophy resonates with performance consulting pioneers like Robert Mager and performance support advocates such as Jane Bozarth. Yet her contribution is distinct: she gives you a concrete visual and conversational method to practice those ideas. Action mapping saves time, avoids unnecessary courses, and produces learning that participants experience as worth their time because it mirrors their real decisions.

In essence, the book reframes learning design as organizational problem-solving. The measure of success stops being clicks, smile sheets, or test scores and becomes business impact. Once you internalize that shift, every request — every “we need a course” email — turns into an opportunity to ask better questions, uncover real causes, and deliver results that last.


Start with Measurable Business Goals

The first practical step in Action Mapping is setting a business goal that truly matters. Cathy Moore insists that a project must begin with a metric your organization already tracks — something that reveals real-world performance, such as sales conversions, error rates, or time to resolution. The goal ties behavior directly to business results, not to abstract learning outcomes like “understand procedures.”

Structure of a strong goal

Use Moore’s template: “A measure we already use will increase or decrease X% by Date as people DO something.” This phrasing demands clarity. For example, instead of saying, “Salespeople will know product features,” you write, “Mega widget sales will increase 5% by Q4 as salespeople identify the best widget for each customer.” That single sentence tells you whom you’re designing for, what they must do, and how you’ll know if it worked.

Why linking to real metrics matters

Anchoring your project to a real business measure achieves two things: it gains stakeholder commitment (because the goal is visible and meaningful), and it gives you a yardstick for evaluation. When you finish, you can show concrete improvement. Without that connection, your work risks drifting into “course-completion” vanity metrics — data that looks tidy but says nothing about impact.

Examples in practice

In one story, Bob begins with a weak goal — “salespeople will know all features.” After rewriting it with Moore’s help, his goal becomes “increase high-end widget sales by 5%.” Investigating that shift uncovers multiple root causes: poorly aligned commissions, weak questioning skills, and a confusing product catalog. The solution becomes a mix of process changes and realistic practice activities. Both sales results and morale rise because the training directly aligns with real pressures.

Similarly, Grace’s TPS record error problem isn’t solved through a long course. Once reframed as “Reduce the TPS rejection rate to 5% within six months,” the solution appears obvious: a short software popup and a ten-minute activity fix the issue across thousands of employees.

Getting stakeholders to agree

Often your hardest job is selling the idea of measurable goals. Stakeholders may claim “we can’t measure that.” Moore advises you to pair aspirational and practical goals: a broader strategic aim (e.g., reduce turnover) paired with measurable driver metrics (e.g., increase feedback frequency). If no data exist, propose light-touch measurement — pre/post manager ratings, quick pulse checks, or simple process counts — so progress remains visible.

The message is simple: clear goals create disciplined design. Every later decision — what to include, what to cut, what kind of practice to develop — flows from the purpose you define at the start. This clarity transforms your work from training fulfillment to business partnership.


Define and Prioritize Job Actions

Once the goal is clear, you list what people must do to achieve it. Moore urges you to identify observable behaviors — actions you could record someone performing. These aren’t abstract skills or facts; they’re concrete acts like “de-escalate an angry customer call” or “enter XR codes correctly.” Observable actions allow you to diagnose real barriers and design meaningful practice.

From knowledge to observable behavior

If your SME says “They must understand our policies,” ask “If I watched them at work, what would I see?” This question turns theory into filmable action. You might transform “understand safety protocols” into “attach sharps container at bedside before procedure.” That phrasing instantly highlights environment, skill, or incentive issues rather than memorization gaps.

Breaking down complex actions

Complex jobs like construction finishing or customer consultation should be decomposed into smaller steps. “Follow the procedure” is too vague for design or analysis. Instead, list essentials: mix proportions, confirm surface moisture, smooth slabs. Breaking down steps makes root-cause analysis and scenario design more precise.

Model-based standards

When actions rely on interpersonal skill — like “be supportive” or “coach effectively” — you define explicit models or scripts (e.g., “Apply the Supportive Conversation model”). Models create observable criteria for good performance and make practice design concrete.

Prioritizing the few that matter

Instead of cataloging every possible task, Moore tells you to focus on high-impact actions — those that most affect the business goal or cause the most errors. With limited resources, designing strong practice for five to ten actions yields far more impact than spreading effort thin across everything.

Once your actions are observable, specific, and prioritized, your map becomes a roadmap to performance. You have tangible handles for analysis and practice — the foundation for all later design choices.


Diagnose the Real Causes

The moment you’ve listed target actions, resist jumping to solutions. Instead, ask: “Why aren’t people doing this now?” Moore’s diagnostic flowchart forces you to examine four categories — Environment, Skill, Knowledge, and Motivation — in that order. This disciplined thinking prevents wasteful training projects built on assumptions about ignorance.

The four diagnostic layers

  • Environment: Are tools, processes, or incentives hindering success?
  • Skill: Do people have the procedural fluency to perform?
  • Knowledge: Are rules or facts missing — or can they be looked up?
  • Motivation: Do rewards, culture, or meaning reinforce the behavior?

Environment first, always

Most performance gaps live in the environment. The famous “sharps safety” example proves it: nurses recap needles because sharps containers are too far away, not because they forgot the rule. Fix the bracket location and you solve the problem — no course needed. Starting with the environment makes you an ally to operations, not a trainer in isolation.

Root-cause questioning

Use iterative “why” questioning to uncover system issues. In one case, falsified reports turned out to stem from conflicting quotas; Moore showed how an organization’s own metrics can drive the wrong behavior. Real change meant altering incentives, not adding more training modules.

This diagnostic habit transforms you from an information pusher to a detective. Once you know the true barriers, you can choose appropriate fixes: job aids for simple lookups, targeted practice for skill gaps, policy changes for environmental barriers, and coaching or leadership intervention for motivation.


Design Practice That Mirrors the Job

Once causes are known, you design activities that let learners decide — not recite. Moore’s mantra: learners should practice making realistic choices and experience the consequences. This is where the “show, don’t tell” principle of modern learning design comes into play.

What makes an effective activity

  • Decision: learners must choose an action, not recall trivia.
  • Context: embed realistic names, places, and constraints.
  • Realism: mirror the actual job environment.
  • Consequences: show outcomes so choices matter.

Scenarios, not slides

Cathy Moore popularized the mini-scenario as the atomic unit of learning. A mini-scenario poses one meaningful choice and shows what happens next. You can string several into a branching scenario to model complex judgment. Even multiple-choice questions become design tools: drafting a strong stem forces you to write contextual details and plausible mistakes, making even a quiz question a micro‑simulation of thought.

How to interview SMEs for stories

Good practice design depends on good source material. Don’t ask experts “What should people know?” Ask “Tell me about a time someone made a mistake.” Dig into what happened and why. Moore’s example from Weeber Widgets — interviewing Arno about how salespeople respond to technical objections — yielded authentic distractors that made practice realistic. SME interviews become story‑hunting sessions that expose real thinking, not lists of rules.

Feedback that shows, not tells

Feedback should continue the story before explaining. Instead of “Incorrect,” show the consequence. When the salesperson pushes back defensively, show how the customer shuts down. Only after showing should you offer optional “Why?” feedback or link to a job aid. This approach transforms feedback into experience rather than evaluation.

When your practice looks and feels like the job, learning sticks. Learners engage emotionally and cognitively, building judgment and confidence that no slide could achieve.


Make Help Available, Not Forced

Even the best-designed activities fail if you drown learners in pre‑teaching. Moore advocates a stream-of-activities model — an ongoing flow of realistic challenges, each with optional help and embedded resources. Learners pull what they need when they need it instead of being force-fed information upfront.

Pull, don’t push

In the traditional tell‑then‑test cycle, the course lectures first and tests later. Moore flips that order. Learners face a challenge first (test‑then‑tell or a pure stream of activities), get feedback, and explore optional references. This respects their intelligence and focuses attention on relevance. Only you, the designer, track what is required and what is optional; from the learner’s perspective, everything feels fluid and purposeful.

Categorize information

Decide what must be memorized versus what can be looked up. For instance, a complex database code sheet should live as a job aid; a conversational model must be internalized through practice. Embedding job aids and worked examples within activities — such as photos of hospital signage or step-by-step demonstrations — connects learning directly to workplace tools.

Scaffold and fade

Start with generous hints, then gradually remove them as competence grows. Easy early tasks build confidence; later challenges raise difficulty and autonomy. This scaffolding mimics real growth and keeps frustration low while maximizing transfer.

When you make information available rather than forced, you build learners’ curiosity and independence. They learn to find what they need — the very skill that sustains performance long after training ends.


Prototype, Deliver, and Evaluate

After mapping, diagnosing, and designing, Moore brings you to production. Her rule: prototype early, and iterate fast. You don’t need a polished package to validate design. In fact, rough prototypes — grayscale screens, text-only branches, printed sheets — keep stakeholders focused on logic, not aesthetics.

Iterate with purpose

Build a small interactive sample. Let SMEs test accuracy, then let real learners try it. Observe where they misunderstand, and adjust content or context before expanding. Branching scenarios evolve through four cycles: prototype key decisions, test flow, add dialogue, and finally polish selected nodes. This agile rhythm averts large-scale rework and surfaces hidden confusion.

Integration with operations

Because many interventions include non‑training solutions, coordinate timing with environmental fixes. For example, launch a safety refresher only after new containers arrive or incentive structures shift. Training succeeds only when environment and reinforcement systems align.

Evaluation for real impact

Evaluation follows Brinkerhoff’s Success Case Method: after rollout, survey participants, then interview the best and worst performers. Discover what worked and what didn’t. These stories complement numbers, giving qualitative insight. For shorter timelines, track closer indicators — task-specific performance logs, help-desk volume, or LMS behavior analytics.

Assessments themselves can reuse your realistic activities: simply remove feedback and score decisions by outcome quality. That turns evaluation into another measure of authentic performance, closing the loop between design and results.

By producing in small batches, testing continually, and measuring against the original goal, you build credibility and save resources. You don’t just finish projects — you prove value.
