Autonomy

by Lawrence D. Burns

Autonomy delves into the evolution of driverless vehicles, revealing the challenges and triumphs of innovators who are reshaping the automotive world. With insights from Silicon Valley and Detroit, this book explores the technological breakthroughs and cultural shifts behind the automation revolution.

Reinventing Mobility for a New Age

How can you reinvent a system that promises freedom but delivers inefficiency, danger and waste? In Autonomy, Lawrence Burns and Christopher Shulgan argue that the automobile—once a symbol of progress—has reached a breaking point. Cars deliver convenience at extraordinary social, economic and environmental cost, and the only solution is to change their DNA through autonomy, electrification and shared use. To understand why, Burns walks you through the evolution of mobility and the extraordinary group of people who built the first truly driverless systems.

The contradiction of automobility

Today’s car culture ties personal freedom to a massively underutilized asset. The average car sits still ninety-five percent of the time, burns fuel that mostly dissipates as heat, and requires land, debt and maintenance. Burns calls it the “occasional-use imperative”: we buy giant machines to satisfy rare peak needs—like a long road trip—while absorbing constant costs the rest of the year. This mismatch turns freedom into a chore and ownership into a design flaw.
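The arithmetic behind that “ninety-five percent idle” figure is simple to sketch. The daily driving time below is an illustrative assumption, not a number from the book:

```python
# Rough utilization arithmetic behind the "ninety-five percent idle" claim.
# The daily driving time is an assumed round figure for illustration.
hours_driven_per_day = 1.2  # assumed average time a private car is in use
idle_fraction = 1 - hours_driven_per_day / 24

print(f"Idle fraction: {idle_fraction:.1%}")  # → Idle fraction: 95.0%
```

Even doubling the assumed driving time leaves the car parked about ninety percent of the day, which is the core of the occasional-use imperative.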

The civic toll is just as severe. Globally, cars kill about 1.3 million people a year and occupy precious city space with parking lots and wide roadways. Fuel dependence ties national security to oil and urban growth to sprawl. Burns’s conclusion: what began as a vehicle for progress now anchors us to outdated inefficiencies.

From critique to creation

The story of Autonomy isn’t an elegy for cars—it’s a chronicle of their rebirth. After the 9/11 attacks, Burns felt personally responsible for reducing oil dependence. As General Motors’ R&D chief, he championed the “Autonomy” and “Hy-wire” concepts, early glimpses of cars driven by electricity and software rather than pistons and gears. Simultaneously, a different stream of innovators emerged from academia and defense research, led by people like Red Whittaker, Chris Urmson and Sebastian Thrun. Their tool wasn’t corporate policy—it was code.

The experiments that changed everything

DARPA’s Grand Challenges, beginning in 2004, were the crucible. They turned autonomy from theory into gear-clanking reality. Carnegie Mellon’s Red Team, with its rugged vehicle Sandstorm, and Stanford’s team, guided by Thrun, battled deserts, sensors and algorithms. When Thrun’s Stanley won the second Grand Challenge in 2005 and Whittaker’s Boss won the 2007 Urban Challenge, autonomy ceased being fiction—it became a field-tested discipline. These races trained an entire generation of roboticists who would later build Waymo, Argo AI, Aurora and Uber’s Advanced Technologies Group.

Burns narrates how the contests’ failures—rollovers, fried circuits, late-night welds—proved that innovation depends on testing to failure and learning fast. It’s not glamorous, but it’s the secret behind robust autonomy.

Data, maps, and behavioral intelligence

Once robots proved they could drive in deserts, the challenge moved to cities. There, the problem changed from terrain to behavior. City driving requires knowing not only where things are but what they intend to do. Google’s Chauffeur program, born from Thrun’s Stanford lab and funded by Larry Page and Sergey Brin, layered LIDAR, radar, and rich Street View maps to perceive traffic at centimeter precision. Engineers like Urmson and Dmitri Dolgov built behavioral engines to predict human action—whether a cyclist might swerve or a police officer’s hand motion means “go.”

This behavioral layer turned vehicles from sensors-on-wheels into social participants, negotiating city life much like people do. (In effect, Burns shows, autonomy marries physics with empathy.)

Economics of transformation

Burns and economist Bill Jordan model the financial case: Americans spend roughly $4.5 trillion a year on mobility. Shared, electric, autonomous fleets could cut that to about $0.20 per mile, saving trillions annually while slashing emissions and congestion. In this framing, autonomy is not a gadget but an economic revolution akin to electrification or the Internet’s rise.

People, power, and conflict

Yet every breakthrough carries human drama. Inside Google, the Chauffeur team’s creative tension—Urmson’s disciplined safety ethos versus Anthony Levandowski’s brash speed—shaped outcomes. Incentive plans bred rivalries; departures seeded new start-ups and lawsuits. Burns uses this to reveal a truth: technology evolves through people’s ambitions and conflicts as much as through code.

The moral of autonomy

Burns closes with humility. The tragedies of Tesla’s Autopilot and Uber’s Tempe crash prove that progress without safety culture can backfire. Autonomy must earn trust through design, transparency and ethics. When it does, it promises not just driverless cars but a reimagined society—one where mobility is safer, cheaper and cleaner, and every person can summon freedom without owning it. That’s the new frontier Burns asks you to imagine—and to help build.


Breaking the Old Car Paradigm

Burns begins with an unsparing look at the modern car’s inefficiency. A vehicle that weighs well over a ton moves a driver who weighs less than 200 pounds, converting less than 30 percent of its fuel energy into motion. The result is a system that sits idle ninety-five percent of the time, wastes energy, clogs roads and saddles households with debt. The book’s first argument is moral and mathematical: when a technology consumes more than it liberates, it must evolve.

Freedom versus friction

Automobility equates ownership with freedom, but it exacts costs you barely see—insurance, registration, fuel, parking, opportunity time. Burns frames this as a systemic design failure, not a cultural quirk: the market alone will not optimize a tool built for scarcity in an age of abundance. (Note: This echoes concepts in The Innovator’s Dilemma—legacy industries rarely reinvent their cost base on their own.)

Social and civic prices

The harm isn’t hidden. Road fatalities are the leading cause of death for people aged 5–29 worldwide. Entire swaths of urban land—up to thirty percent in U.S. cities—serve parking, not people. Burns argues that transforming this system offers a once-in-a-century opportunity: cities could reclaim space for housing and recreation while dramatically reducing global emissions.

Why people act

The movement toward autonomy stems from personal motivation as much as logic. Google’s Larry Page envisioned effortless point-to-point travel after standing in the cold at Michigan bus stops; Burns redirected GM’s R&D after 9/11 underscored America’s oil vulnerability. Visionaries act when inconvenience turns moral. That emotional insight explains why this book treats autonomy not as novelty but as necessity.


From DARPA to Silicon Valley

To build machines that could drive themselves, engineers first had to crash a few. DARPA’s Grand Challenge series became that ritual of failure. In the early 2000s U.S. defense planners sought driverless supply vehicles. Red Whittaker’s Carnegie Mellon team entered the Mojave with Sandstorm, a battered Humvee stuffed with sensors and servers, only to topple after seven miles. Rather than ending the effort, that failure became a classroom for an entire generation.

Desert lessons

Each Grand Challenge iteration refined both tools and thinking. Chris Urmson’s insight to use pre-mapped routes rather than blind exploration laid the groundwork for modern autonomy’s dependence on high-definition maps. In 2005, Sebastian Thrun’s Stanford team, with the vehicle Stanley, won by fusing LIDAR, cameras, and GPS—creating what Burns calls “the desert’s first algorithmic driver.” The later Urban Challenge in 2007 required vehicles to navigate city-like courses obeying traffic laws. Whittaker’s Boss won, proving robots could negotiate human rules, not just dunes.

The human factor in discovery

Burns humanizes these breakthroughs through profiles: Whittaker’s drill-sergeant rigor, Urmson’s calm discipline, Thrun’s AI curiosity, Levandowski’s hacker impatience. Each personality balanced risk and obsession differently, but together they transformed the field from hobbyist tinkering to credible science. Their mantra—test, break, fix, repeat—became the DNA of future autonomous projects. The Grand Challenges weren’t just races; they were incubators for the minds that would later power Google, Uber, and beyond.


Maps, Sensors and Machine Perception

After DARPA’s deserts, the problem shifted to civilization’s mess: cities full of crosswalks, signs and human chaos. Burns explains that autonomy’s maturity depends not on a single magical sensor but on data fusion and map intelligence. In other words, robots must learn to see, understand and predict.

Mapping the world before driving it

Early teams discovered that pre-mapping roads—recording lanes, curbs, and static landmarks—simplifies perception. Google’s Ground Truth project perfected this by combining Street View imagery with human verification crews, many in Hyderabad, who edited and corrected map data. That blend of automation and human oversight created lane-level accuracy, vital for safe autonomy.

Seeing through multiple eyes

A modern autonomous vehicle has overlapping perception systems: spinning LIDAR to generate 3‑D geometry, cameras for color and signs, radar for adverse weather, and GPS plus IMU for positioning. The challenge is synchronizing all inputs—any misalignment can misplace an object by meters. That’s why development teams spend months calibrating sensors and perfecting time stamps.
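The stakes of that synchronization can be shown with simple arithmetic. The speed and clock-offset figures below are illustrative assumptions, not numbers from the book:

```python
# Why timestamp misalignment matters in sensor fusion: a small clock
# offset between two sensors shifts a moving object's apparent position.
# Both figures are assumed for illustration.
relative_speed_mps = 15.0  # ~34 mph closing speed between car and object
clock_offset_s = 0.1       # 100 ms misalignment between, say, LIDAR and camera

position_error_m = relative_speed_mps * clock_offset_s
print(f"Apparent position error: {position_error_m:.1f} m")  # → 1.5 m
```

A meter and a half is the difference between an object in your lane and one safely beside it, which is why calibration and time-stamping consume so much development effort.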

Predicting behavior

Perception alone doesn’t guarantee safety. Google’s Chauffeur team built a behavioral engine that runs constant intent predictions—estimating where cyclists, pedestrians or vehicles will move seconds ahead. When you teach a system that a police officer’s raised palm means “stop” or that a child near a soccer ball might dart into traffic, you move from machine perception to machine empathy. This leap defines modern autonomy’s intelligence layer and distinguishes it from driver assists that merely sense but don’t truly understand context.
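A minimal sketch of what “predicting seconds ahead” means: extrapolating a tracked road user’s position from its recent motion. Real behavioral engines use learned models of intent; this constant-velocity baseline is an illustrative assumption, not Chauffeur’s actual method:

```python
# Illustrative short-horizon motion prediction (constant-velocity baseline).
# Production behavioral engines use learned intent models; this sketch
# only shows the shape of the problem.
def predict_position(x, y, vx, vy, horizon_s):
    """Predict (x, y) after horizon_s seconds, assuming constant velocity."""
    return x + vx * horizon_s, y + vy * horizon_s

# A cyclist at (0, 0) moving 4 m/s forward while drifting 0.5 m/s toward
# the car's lane: where will they be in three seconds?
future = predict_position(0.0, 0.0, 4.0, 0.5, horizon_s=3.0)
print(future)  # → (12.0, 1.5)
```

The drift term is the point: a planner that only sees where the cyclist *is* misses that in three seconds they may be a meter and a half into the lane.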


Testing, Iteration, and Error Culture

Burns emphasizes one simple but difficult rule: build to fail. From Sandstorm’s rollovers to Boss’s near-collisions in testing, progress required deliberate exposure to failure. Autonomy advances not by avoiding error but by confronting it safely and systematically.

Designing edge cases

The best teams simulate or recreate worst‑case scenarios—dust storms, sensor blinds, hard braking—until systems can recover. At Carnegie Mellon, Whittaker’s engineers once slammed a test vehicle into a concrete barrier to provoke failure data. These brutal tests taught resilience: mount electronics correctly, stabilize sensors, and design fallback behaviors like the “shake‑and‑shimmy”—tiny steering corrections that help a robot regain orientation when confused.

Iterating like software, not hardware

Unlike traditional automaking, which perfects designs before production, autonomy demands software loops—daily testing and update cycles more akin to Silicon Valley than Detroit. Burns’s takeaway for you: if you want reliability in unstructured environments, encourage controlled chaos during development. Every failure is data; every quick fix is a culture of learning made visible.


Inside Google’s Breakthrough

The commercial chapter begins not in a boardroom but with a prank: Anthony Levandowski’s Prius delivering pizza across the Bay Bridge. That television stunt caught Larry Page’s attention and foreshadowed Google’s entry. What followed was one of the decade’s most ambitious research pivots—Chauffeur, the prototype that became Waymo.

Building a testing empire

Google’s engineers—Thrun, Urmson, Levandowski—set concrete goals like “Larry1K”: drive one thousand miles of complex California routes autonomously. The team logged 100,000 miles across highways, coastlines and urban cores, confronting fog, debris and impatient human drivers. Their tools: precise maps, multi‑sensor fusion, and high‑level behavioral prediction. Each successful run earned champagne bottles signed by drivers—small trophies for invisible breakthroughs.

Turning prototypes into proof

When the car completed the final Larry1K route through San Francisco’s chaotic streets without fault, Google proved autonomous driving wasn’t a lab trick. Carefully timed media demos with journalists and a ride for Burns himself converted skepticism into industry panic. Detroit once mocked; now it copied.

Ethics and culture within Google

Burns, acting as advisor, chronicles both brilliance and friction. Fancy equity plans (the Chauffeur Bonus Plan) magnified rivalries between Urmson’s safety-first discipline and Levandowski’s speed obsession. The resulting rift birthed start-ups and lawsuits that reshaped Silicon Valley mobility. Yet amid tension, one principle endured: the pursuit of a car that drives itself safely everywhere. That ambition kept the mission alive long after the original team splintered.


Designing the Future Vehicle

To make autonomy practical, Burns argues, you must change the car’s physical DNA. The internal-combustion vehicle—thousands of parts and mechanical linkages—is outdated in a world of electric drivetrains and digital controls. The solution is the “skateboard” architecture: a flat platform embedding batteries or fuel cells, electric motors, and by-wire systems under interchangeable passenger pods.

Skateboard and by‑wire systems

GM’s Autonomy and Hy‑wire concept cars showcased this idea. With steering and brakes managed through software, and modular bodies atop standardized platforms, cars become more like updatable electronic devices than bespoke machines. This modularity simplifies manufacturing and opens new service models such as shared fleets or customizable pods for logistics, commuting, or emergency transport.

Fewer parts, more intelligence

A fuel‑cell electric architecture uses one‑tenth as many parts as a combustion engine. Fewer parts mean cheaper assembly, fewer failure points, and the shift of value from mechanics to software—an industrial upheaval as large as Ford’s original assembly line. Burns connects hardware simplicity to autonomy’s economics: when vehicles last longer and run in fleets for hundreds of thousands of miles, per‑mile costs drop dramatically. The Firefly pod—Google’s small, pedal‑free prototype—embodies that philosophy: design form around function and service, not heritage styling.


The $4 Trillion Disruption

Numbers make the revolution inescapable. Burns and Bill Jordan’s modeling shows how shared, electric, autonomous fleets could slash America’s transportation costs by trillions. At about 20 cents per mile, such systems undercut private car ownership by over 80 percent while improving safety and access.

From cost burden to service efficiency

Americans spend roughly $1.50 per mile on direct and indirect car costs—fuel, depreciation, time lost. Deploying shared autonomous pods in cities like Ann Arbor or Manhattan could meet mobility demand with one-sixth as many vehicles operating at multiple times the daily utilization rate. Freed land, lower emissions, and reduced traffic injuries amplify those savings across society.
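The per-mile comparison can be made concrete with a back-of-envelope calculation. The $1.50 and $0.20 figures come from the book; the annual mileage is an assumed round number for illustration:

```python
# Back-of-envelope version of the Burns/Jordan per-mile comparison.
# Cost-per-mile figures are from the book; annual mileage is assumed.
annual_miles = 12_000                    # assumed typical annual mileage
owned_cost = 1.50 * annual_miles         # private ownership, per the book
fleet_cost = 0.20 * annual_miles         # shared autonomous fleet, per the book
savings_pct = (owned_cost - fleet_cost) / owned_cost

print(f"${owned_cost:,.0f} vs ${fleet_cost:,.0f} -> {savings_pct:.0%} saved")
# → $18,000 vs $2,400 -> 87% saved
```

That roughly 87 percent saving per household is the source of the “over 80 percent” undercutting claim, and multiplied across millions of drivers it yields the trillions Burns cites.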

Winners and dislocations

Burns doesn’t gloss over disruption. Millions of driving, mechanical and fueling jobs will change or vanish. But the net benefits—cheaper transport, new service industries, cleaner air—mirror previous industrial revolutions. The challenge for you and policymakers is managing transition humanely while embracing efficiency gains that are mathematically overwhelming.


Crises, Ethics, and the Next Era

Progress often invites premature imitation. Tesla’s Autopilot and Uber’s self-driving programs rushed semi-autonomous systems to the road before safety and user understanding caught up. Fatal crashes—Joshua Brown’s in 2016, Elaine Herzberg’s in 2018—revealed how fragile public trust can be when marketing exceeds capability. Burns dissects these cases to illustrate the human-factors challenge at autonomy’s core.

The human fallback trap

Semi-autonomous systems that require human monitoring fail because people aren’t reliable monitors. Internal Google tests warned of this early—a driver once slept for twenty‑seven minutes during testing. When reengagement takes more than a few seconds, accidents become inevitable. Hence the Chauffeur team’s pivot away from driver-assist toward full autonomy, removing the human fallback.

Ethical design and communication

Naming a system “Autopilot” or disabling emergency braking for comfort are not just technical decisions—they are moral ones. Burns urges you to align design, language and policy with reality. Trust grows from transparency, conservative engineering and humility, not hype. The lesson of these crises is that safety culture—not speed to market—determines who earns the right to redefine mobility’s future.
