Start at the End

by Matt Wallaert

Start at the End provides a groundbreaking approach to product design by focusing on desired consumer behavior and working backward. Learn to craft impactful products using behavioral science, map consumer influences, and ensure ethical integrity throughout the process.

Start at the End: Designing a World Through Behavior Change

What if you thought about every product, policy, or idea you designed not as an object, but as a way of changing behavior? That’s the question behavioral scientist Matt Wallaert asks in Start at the End, a manifesto-meets-handbook on how to make behavior change the central purpose of design. Wallaert argues that nearly all human creation—from sidewalks to software—is an attempt to shape how people act, yet most creators and organizations fail to acknowledge this truth explicitly. As a result, we make products that look good or sound visionary but don’t deliver the behavioral outcomes we actually want.

The core argument: everything we create is an intervention that alters human behavior, whether or not we intend it to. The problem is that most organizations, especially in business, start with ideas or aesthetics instead of the behavior they want to create. Wallaert’s solution is simple but radical: start at the end. Define the behavior you want to see first, then work backward through a systematic scientific process to design interventions that make that behavior more likely. This becomes the Intervention Design Process (IDP)—a repeatable, evidence-based framework for producing predictable behavior change.

The Case for Behavior as the True Outcome

Humans, Wallaert says, are born behavioral scientists. From infancy—when we cry to get food—we are already experimenting with cause and effect. Every adult interaction, workplace policy, or marketing campaign functions the same way: a test of which conditions make specific behaviors more or less likely. Yet unlike our personal experimentation, most professional creativity ignores the science of behavior entirely. In the corporate world, leadership often functions like an episode of Mad Men: powerful people throwing around ideas, then rationalizing them after the fact with glossy PowerPoints and vague mission statements about disruption or innovation. Instead of focusing on outcomes, they fetishize process or appearance.

The result is enormous inefficiency—Wallaert cites America’s $220 billion advertising industry as an expensive compensatory mechanism for products that weren’t designed with behavior in mind. If creators built things explicitly to change behavior in desired ways, we wouldn’t need to shout about them afterward.

The Counterfactual World

The goal, Wallaert explains, is to bridge what psychologists call the counterfactual world—the world that doesn’t yet exist but could—with our current reality. For any given issue, there’s a “world as it is” and a “world as we want it to be.” Behavioral design asks: what’s stopping us from getting there, and what could we create to make that new world real? This is both a scientific and ethical pursuit. Anchored in social psychology, it insists that systematic experimentation—not guesswork—drives meaningful and measurable change.

To operationalize this, the IDP invites you to move step by step from insight to scale. You begin by identifying and validating a potential insight (an unexpected gap between what is and what could be), creating a behavioral statement that articulates your target outcome, mapping the pressures influencing that behavior, designing and selecting interventions, testing and scaling them ethically, and monitoring their results continuously. The process eliminates the guesswork of design-by-opinion and replaces it with structured iteration.

The Forces That Shape What We Do

At the heart of behavior are two opposing forces: promoting pressures and inhibiting pressures. Promoting pressures make a behavior more likely (“I’m hungry, so I buy food”), while inhibiting pressures make it less likely (“I’m broke, so I skip lunch”). Every action is the product of their balance. Wallaert’s genius lies in showing that most designers fixate on promoting pressures—creating shiny ads or features to add motivation—while neglecting the power of removing barriers. Often, the easiest wins come from reducing inhibiting pressures instead of boosting motivation. His example of Uber illustrates this perfectly: the company didn’t make people want rides more—it simply removed the friction of finding, paying, and trusting drivers.

This dual-force framework underpins all of behavior design, whether you’re building software, managing teams, or nudging public health. Recognizing both sides of the equation helps you design interventions that truly work—and work for people’s real-world constraints.

Ethics, Identity, and Impact

Because every intervention intentionally alters what people do, ethical scrutiny is essential. Wallaert insists that ethical design means aligning your interventions with people’s stated motivations, not manipulating them into actions that serve only your interests. Transparent, population-centric design ensures that behavior change empowers rather than exploits. He condemns the “dark side” of behavioral science, such as Uber’s manipulation of drivers to stay logged in longer or Facebook’s secret emotional manipulation studies. In contrast, his approach demands consent, clarity, and an outcome-focused morality: the behavior must serve both designer and participant.

From Advertising to Allyship

Beyond business, Wallaert imagines behavior change as a civic and ethical revolution. He envisions a world where social problems—like racism, sexism, environmental damage, or poverty—are addressed not just through awareness but through design. If sidewalk layouts, digital forms, and product experiences were all built to gently push better collective behavior, large-scale change could emerge organically. He calls this “guerrilla warfare” for good: a democratized behavioral science that allows small, nimble actors to outthink big companies’ brute-force budgets.

The takeaway: behavior change is not just a marketing trick or management fad. It’s the cornerstone of meaningful creation. By starting at the end—defining the behavior you want to see—then systematically designing, validating, and scaling interventions, you can build not just better products or companies but a better world. Wallaert’s message is liberating: you don’t need a PhD to change behavior, just process, persistence, and purpose.


The Intervention Design Process (IDP)

At the center of Start at the End is the Intervention Design Process (IDP)—Wallaert’s playbook for turning behavioral science into everyday practice. It’s an iterative, structured system that helps you identify the behavior you want, understand what drives or inhibits it, and then design and test interventions that work in the real world. Each stage builds on the last, transforming loose ideas into validated actions that scale.

1. Start with a Potential Insight

Everything begins with an insight—a clue that the world isn’t functioning as it could. Insights come from four sources: quantitative data (patterns and anomalies), qualitative research (observation and interviews), apocryphal corporate wisdom (what “everyone” believes), and external knowledge (academic papers or other industries). Great interventions often start with a surprising discrepancy between what people believe and what they actually do, like the Frito-Lay janitor Richard Montañez noticing that no Cheetos flavor catered to Latinx consumers—leading to Flamin’ Hot Cheetos.

2. Validate Your Insight

Validation is the antidote to assumption. Before acting, you must test whether your insight is real, using multiple kinds of evidence (called convergent validity). In the Bing in the Classroom case study, Wallaert confirmed that teachers, not students, were the real barrier to search use in schools by combining quantitative data with classroom observations. This blend of data, empathy, and skepticism keeps teams honest and prevents Mad-Men-style leaps of faith.

3. Write a Behavioral Statement

A behavioral statement articulates exactly what you want to happen. It follows this formula: “When [population] wants to [motivation], and they [limitations], they will [behavior] (as measured by [data]).” For example: “When students have a curiosity question and are near a computer, they’ll use Bing to answer it.” This approach forces clarity and accountability—every word defines your outcome and your boundaries.
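The formula reads like a fill-in template, and it can be sketched as one. A minimal illustration (the field values below are hypothetical examples, not taken from the book):

```python
# Wallaert's behavioral-statement formula as a fill-in template.
# The example values are hypothetical, for demonstration only.
TEMPLATE = ("When {population} wants to {motivation}, and they {limitations}, "
            "they will {behavior} (as measured by {data}).")

statement = TEMPLATE.format(
    population="a student",
    motivation="answer a curiosity question",
    limitations="are near a classroom computer",
    behavior="use Bing to search for the answer",
    data="daily query counts per student",
)
print(statement)
```

Writing the statement this mechanically is the point: if you cannot fill every slot, including the measurement, your outcome is not yet defined.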

4. Map the Pressures

With your behavior defined, you identify the promoting pressures (forces that encourage behavior) and inhibiting pressures (forces that discourage it). This mapping distinguishes between what you can amplify and what you should remove. For Bing, curiosity was already abundant (a promoting pressure), while barriers like teacher fear of unsafe content and classroom chaos were inhibiting. Only by seeing both sides could the team design relevant interventions.

5. Design and Select Interventions

Here creativity meets science. You brainstorm many possible interventions—each explicitly mapped to the pressures you identified. Then, through selection, you choose a few to pilot. Wallaert emphasizes optimum distinctiveness: pilots should cover a range of ideas that don’t overlap too much, to maximize learning. He illustrates this in Clover Health’s flu-shot program, which tested faith-based clinics, personalized letters, and community messaging—different approaches to the same outcome.

6. Ethical Check

Before launching, you scrutinize whether your intervention respects participants’ motivations and autonomy. Wallaert defines unethical design as any intervention that either violates someone’s motivations or creates more harm to other motivations than benefits to the target one. Facebook’s and Uber’s manipulative experiments serve as cautionary tales, reminding readers that transparency and responsibility are non-negotiable in ethical behavior change.

7. Pilot, Test, Scale, and Monitor

You don’t ship finished products—you pilot tests. A pilot is a small, rough experiment designed to see if the intervention changes behavior at all. If it works, you run a larger, more operationally clean test to confirm impact and feasibility. If that works, you scale—with continuous monitoring to track effect over time. Wallaert’s mantra: “Slow is smooth, smooth is fast.” You learn through small, validated steps, avoiding large, costly mistakes.

Across industries—from healthcare to tech—the IDP has one mission: to replace intuition with intention. By following it, you don’t just build things that look good; you build things that work because they deliberately and ethically change behaviors in measurable ways.


Finding and Validating Insights

Behavioral design begins with curiosity: noticing something about the world that doesn’t quite fit. Wallaert calls this a potential insight—an observation about the space between how things are and how they could be. But insights aren’t truths; they’re hypotheses to be tested. The work lies in validating them through diverse evidence until you can confidently say: “Yes, this gap is real, and it’s worth closing.”

Four Kinds of Insights

Quantitative insights emerge from data—anomalies, correlations, or outliers that hint at deeper behavioral patterns (for example, noticing that kids made fewer than one online query per day). Qualitative insights come from observation and interviews; they reveal why people act as they do. Apocryphal insights are the informal “everybody knows” beliefs inside organizations that may or may not be true. External insights come from research or other fields, such as academic papers, cross-industry analogies, or talking to grad students with fresh ideas.

The Role of Convergent Validation

Wallaert warns that humans naturally seek to confirm their assumptions—a cognitive bias that leads us to cherry-pick favorable evidence. To resist it, behavioral scientists use convergent validity: multiple independent lines of evidence that converge on the same conclusion. The Bing study, for example, relied not only on usage data but also direct observation. When quantitative data and qualitative observation agree, reliability soars. Wallaert likens this to a table needing multiple legs: the wider apart they are, the more stable your conclusion.

Horizontal Insight Generation

Great insights come from diversity. Frito-Lay’s company culture allowed a janitor like Richard Montañez to pitch directly to the CEO, bypassing hierarchy and creating an innovation that revolutionized snack food. Barack Obama’s White House used citizen letters for nationwide insight gathering. In contrast, companies that suppress cross-level dialogue stagnate. Insight generation thrives in transparent, inclusive systems that let anyone surface an anomaly or idea worth testing.

When you democratize insight finding—through open data, staff brainstorming, or direct access to users—you form a wider funnel of opportunities for change. The joy of behavioral science, Wallaert says, is that every observation hides a potential new universe where things work better. Your mission is to validate enough of them to start building that world.


Pressures: The Hidden Forces Behind Behavior

Once you know the behavior you want, you have to understand why it isn’t happening yet. Wallaert’s tool for this is pressure mapping: charting the promoting pressures (reasons people do something) and inhibiting pressures (reasons they don’t). Like opposing arrows, the net balance determines what we actually do. Though deceptively simple, this model revolutionizes design thinking.

Promoting vs. Inhibiting

Promoting pressures include desires, social norms, rewards, or convenience. Inhibiting pressures include fear, cost, effort, or lack of knowledge. You can change behavior by increasing promoters or reducing inhibitors—but most organizations only focus on promoters. Wallaert calls this the “Mad Men mistake.” Ads keep shouting louder (“Be excited!”) instead of removing friction (“Make it easier!”). Uber thrived not by inspiring wanderlust but by reducing inhibition: eliminating payment hassle, uncertainty, and waiting.
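As a toy sketch of this balance (the numeric weights are invented for illustration; the book treats pressures qualitatively, not as a score):

```python
# Toy pressure map for a "take a ride" behavior. Weights are made up;
# the point is the structure: two opposing lists, one net balance.
promoting = {"need to get somewhere": 3, "convenience": 2}   # forces making the behavior more likely
inhibiting = {"cost": 2, "payment friction": 2}              # forces making it less likely

def net_pressure(promoting, inhibiting):
    """Positive result = behavior likely. Design can raise promoters or remove inhibitors."""
    return sum(promoting.values()) - sum(inhibiting.values())

before = net_pressure(promoting, inhibiting)   # 5 - 4 = 1
# Removing an inhibitor (e.g., Uber eliminating payment friction) shifts
# the balance without adding any new motivation at all.
inhibiting.pop("payment friction")
after = net_pressure(promoting, inhibiting)    # 5 - 2 = 3
```

The asymmetry the sketch illustrates is Wallaert's point: deleting one inhibitor moved the balance more than any single promoter supplied.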

Everyday Example: M&M’s

Wallaert demonstrates this with M&M’s. Promoting pressures: taste, color variety, nostalgia, and fun branding. Inhibiting pressures: availability, health guilt, and context (you don’t serve M&M’s at a fancy dinner). Many companies chase new flavors (more promoters) while ignoring the possibility of making M&M’s easier to access (fewer inhibitors)—like Amazon auto-ship or vending ubiquity. The insight: reducing inhibiting pressures often yields bigger, longer-lasting change than adding promoting ones.

Focusing on Both Sides

Balanced mapping forces nuanced design. Clover Health, for instance, discovered that Black seniors’ flu shot avoidance came not from “lack of motivation” but from deep institutional distrust—an inhibiting pressure rooted in medical racism. Interventions that built trust (through faith-based engagement) worked far better than motivational messaging alone. Similarly, Microsoft learned that it wasn’t student apathy hindering Bing use—it was teacher anxiety about safety and privacy.

When you stop blaming people’s motivation and start fixing the world around them, behavior change becomes achievable. The simple act of listing both pressure types reveals leverage points that marketing slogans never touch. The best designs make the right behavior the easy one.


Ethical Science of Behavior Change

Every intervention changes how someone acts. That power demands guardrails. Wallaert devotes an entire chapter to the ethics of behavior design, laying out a practical code for ensuring your interventions respect autonomy and honesty. He argues that all behavior change is moral only when it aligns with people’s own motivations and transparently shares responsibility for outcomes.

The Intention-Action vs. Intention-Goal Gap

The first ethical test concerns motivation alignment. The intention-action gap (I want to act but don’t) is safe to address—for example, you help people exercise by reducing friction. The intention-goal gap (I want the result but not the method) is riskier: convincing someone to act against their stated preference (“You want to stay healthy, so get a flu shot, whether or not you want vaccines”). The ethical rule: if the behavior doesn’t honor any genuine motivation of the population, it’s unethical.

The Test of Cost and Benefit

Even ethically motivated behaviors require moral accounting. An intervention becomes unethical when its harms to other motivations outweigh its benefits. Wallaert mocks Uber’s defense of overworking drivers (“they can stop whenever they want”): free will isn’t an excuse for manipulation. Facebook’s emotional contagion study likewise failed because it caused harm without informed consent. Ethical interventionists must be willing to publicize and stand behind their actions.

Transparency and Shared Accountability

Wallaert’s final ethical rule: your work should withstand scrutiny. Publish results, document failures, and invite external review. Clover Health’s behavioral lab posts both successful and null results publicly, practicing radical transparency. In an industry steeped in secrecy, that’s revolutionary. Responsible behavior science demands not only “doing good” but also showing your work.

By following these principles—alignment, balance, transparency—you turn behavioral change from manipulation into stewardship. The goal isn’t to trick people into acting differently; it’s to help them act more easily on what they already want.


Testing, Scaling, and Continuous Learning

Designing a great intervention is only step one. The hard part is proving it works and deciding whether it’s worth scaling. Wallaert outlines a disciplined cycle—pilot, test, scale, monitor—that separates scientific rigor from entrepreneurial hype. The process ensures learning at small stakes before committing big resources.

Pilots: Operationally Dirty, Intentionally Small

A pilot is a fast, minimal version of an intervention run with a small, controlled group. Its goal isn’t polish but proof: does it change behavior at all? Wallaert runs “operationally dirty” pilots to minimize time, cost, and emotional investment. For example, his early Bing lesson plans were hand-written rough drafts designed to test engagement quickly. Pilots expose weak ideas cheaply and help teams embrace failure as learning, not embarrassment.

Tests: The Worth-It Phase

When a pilot shows promise, you scale up slightly into a test—a more rigorous, operational version that assesses both impact size and feasibility. It’s here you measure effect size (the magnitude of change) and p-value (the probability that a difference this large would show up by chance alone). Unlike academia’s obsession with p&lt;0.05, Wallaert accepts higher uncertainty thresholds (like p=0.2) for low-risk behavioral pilots—because in applied science, small mistakes beat big ones.
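A minimal sketch of this decision, using a standard two-proportion z-test with only the standard library. The pilot numbers are hypothetical, chosen to land between academia's p&lt;0.05 bar and Wallaert's looser applied bar of p&lt;0.2:

```python
import math

def two_proportion_pvalue(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions (pilot vs control)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical pilot: 24/100 in the pilot group performed the target
# behavior vs 15/100 in the control group.
p = two_proportion_pvalue(24, 100, 15, 100)
effect = 24 / 100 - 15 / 100            # effect size: +9 percentage points
worth_testing_further = p < 0.2         # applied-science bar, not academia's p < 0.05
```

With these numbers the result would fail a journal's significance test but clear the lower-stakes pilot bar, which is exactly the trade Wallaert is describing: accept more uncertainty when the cost of being wrong is small.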

Scaling and Monitoring: Is the Juice Worth the Squeeze?

After testing, you decide whether to scale by writing a “juice/squeeze” summary: “We’re 90% confident that [intervention] increases [behavior] by X% at Y cost.” This structured statement clarifies tradeoffs so resources flow to interventions with high return. But scaling isn’t a finish line—it’s the beginning of continuous monitoring. Every scaled intervention is revalidated over time to detect decay, competition, or changed context. Wallaert calls this the defense against the “piranha effect,” where too many overlapping interventions eat away each other’s attention and effectiveness.

In short, testing isn’t about proving you’re right; it’s about learning faster, cheaper, and more honestly. Continuous measurement keeps science humble—and that humility keeps interventions alive and adaptive in a changing world.


The Cognitive Economics of Attention

If time and money shape our choices, cognitive attention is the ultimate currency. Wallaert’s chapter on “Optimum Cognition” reframes the brain as a finite resource allocator: everything we do competes for mental energy. That struggle defines modern behavior change—and why many products feel exhausting instead of empowering.

The Scarcity of Mental Bandwidth

As we age and multitask, our cognitive “pie” gets smaller. Under stress or sleep deprivation, we rely on shortcuts—biases, heuristics, and defaults. Smart design either lightens that load or guides it deliberately. Uber wins because it eliminates planning strain—no maps, no cash, just tap and go. Blue Apron stumbled because it promised ease but delivered complexity: recipes that required too much cognitive spend.

Automation vs. Curation

People differ in where they want to spend mental energy. Some value automation (removing cognitive effort, like Wallaert’s self-replenishing wardrobe), while others value curation (investing effort in meaningful preference, like custom-building PCs). Effective systems offer both paths—one automatic, one intentional—so users can allocate attention to what matters most.

Designing for the Right Cognitive Load

Reducing cognitive strain isn’t always good; sometimes effort creates satisfaction (we enjoy choosing ingredients because effort signals value). The trick is finding the cognitive “Goldilocks zone”: just enough thought to engage, not enough to exhaust. Behavioral scientists achieve this by tailoring interventions to context—considering fatigue, environment, and trade-offs—and by analyzing what decisions users wish were easier versus worth the mental cost.

In a noisy world, Wallaert reminds designers to respect cognitive bandwidth as a shared, limited resource. Ethical behavioral design doesn’t steal attention—it restores it, helping people spend their minds where they truly want to.


Identity, Uniqueness, and Belonging

At the core of human motivation lies a paradox: we crave to be unique and to belong at the same time. Wallaert calls this the “snowflake-in-a-blizzard” problem. Successful interventions honor both needs—making people feel distinctive enough to matter and connected enough to be safe.

The Push-Pull of Social Identity

Social identity theory (from social psychologists like Tajfel and Turner) argues that our behavior is shaped by in-groups (who we identify with) and out-groups (who we distinguish ourselves from). Wallaert adapts this into a matrix: in-group promoting, in-group inhibiting, out-group promoting, out-group inhibiting. Each combination helps explain why people adopt or reject behaviors.

For example, wearing cowboy boots may express belonging to the “country boy” in-group (promoting), while rejecting suits opposes the “city elite” out-group (also promoting). Identity pressures thus serve as both carrots and sticks that guide behavioral conformity.

Culture, Class, and Context

Belonging vs. uniqueness varies by context. Hazel Markus’s research shows Western cultures prize individuality, while Eastern and lower-socioeconomic cultures value belonging. A wealthy person might be furious if a neighbor buys the same luxury car (threatening uniqueness); a working-class person might start a car club (reinforcing belonging). Context-sensitive interventions that account for this duality resonate better with different groups.

Stable vs. Unstable Preferences

Wallaert differentiates between stable likers/dislikers (deep, identity-based) and unstable ones (trend-based). Stable fans can become invaluable testers or advocates (like Microsoft’s “Insiders” program). Unstable likers act as recruiters—they share and amplify but are fickle. Stable dislikers can reveal inhibiting pressures; unstable dislikers can become vocal critics, as seen in Target’s backlash over removing gendered toy aisles. Understanding these populations lets you design tailored interventions for each: conversation for the stable, containment or conversion for the unstable.

Ultimately, designing for both uniqueness and belonging isn’t about manipulating identity—it’s about respecting it. When people see themselves in what you create and feel part of a tribe that reflects their values, their behavior naturally aligns. That’s not persuasion—it’s empowerment.


The Power of Inhibiting Pressures

While most designers chase motivation, Wallaert falls in love with inhibition. His chapter “Special Factors of Inhibiting Pressures” argues that constraints often hold the strongest keys to change. Removing barriers—rather than adding motivations—produces faster, fairer, and longer-lasting results.

Why Inhibitions Matter More

Because corporate cultures bias toward promoting pressures (“Let’s advertise more!”), inhibiting pressures remain underexplored. That makes them fertile ground for innovation. Uber’s digital payment is a hallmark example: instead of trying to make taxis cooler, it removed the universally disliked task of paying. Suddenly, the same core behavior (getting rides) exploded in frequency.

Homogeneity, Longevity, and Predictability

Inhibiting pressures have special advantages. They affect everyone similarly (everyone dislikes cost or delay), they endure longer than motivations (fashion fades; friction feels eternal), and they are measurable (dollars, minutes, steps). That predictability allows designers to forecast outcomes more reliably than trying to evoke temporary excitement or novelty.

The Penny Gap and Prospect Theory

Drawing on Nobel laureate Daniel Kahneman’s prospect theory, Wallaert notes that removing a cost entirely triggers disproportionate behavioral change. The “penny gap” shows that the difference between free and one cent is enormous—because losses hurt more than equivalent gains feel good. Eliminate a barrier completely, and you unleash exponential results. That’s why making something free (free shipping, free trial, free sample) often beats any marketing campaign.

Inhibiting pressures are the quiet, universal equalizers of behavior design. Remove them, and you unlock the latent motivations that were already there—no persuasion required.
