The Heart and the Chip

by Daniela Rus with Gregory Mone

An overview of roboticist Daniela Rus's case for machines that amplify, rather than replace, human capability.

The Heart–and–Chip Alliance

How can you gain speed, scale, and precision without losing judgment, empathy, and values? In The Heart and the Chip, Daniela Rus argues that robots are tools that amplify your humanity when you design them as partners, not replacements. She contends that pairing the heart (human goals, ethics, creativity) with the chip (sensing, learning, actuation) produces better outcomes—safer surgeries, faster emergency delivery, kinder workplaces—so long as you build for responsibility from the start.

In this guide, you’ll discover the core loop behind useful robots—sense, think, act—and how co-designing bodies and brains turns vision into dependable machines. You’ll then learn why touch and manipulation remain robotics’ hardest frontier, and how learning in simulation plus explainable models (including liquid networks) creates safer autonomy. Finally, you’ll see how soft exoskeletons, drones, and modular swarms extend your reach and buy back time, and why ethics, certification, and a learning workforce keep this future equitable.

Robots as amplifiers, not replacements

Rus opens with a reframing: robots are neither saviors nor villains; they are instruments. When a human pathologist (3.5% error rate) reviews lymphoma slides with an assistive AI, the combined system’s error drops to 0.5%. Zipline’s autonomous drones move blood and vaccines across rough terrain, turning hours into minutes and saving lives. These are not examples of displacement; they’re examples of augmentation—humans decide what matters; machines handle scale and speed.

Core idea

“Robots are tools. They aren’t inherently good or bad. The value depends on what you choose to do with them.”

The sense–think–act foundation

Every successful robot follows the same arc. It senses (cameras, lidar, tactile skin, wearables), thinks (perception, planning, learning), and acts (motors, soft actuators, exoskeleton assistance). You keep human oversight—guardian autonomy, shared control, and local safety loops ensure you can interrupt and steer. This is why Rus stresses architecture: don’t push braking decisions to the cloud; keep fast loops on board, and layer autonomy to hand off control gracefully when the machine is uncertain.
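
The loop can be sketched in a few lines. Everything here (the names, the threshold, the toy world) is illustrative, not from the book; the point is that the safety-critical check lives in the fast, local `think` step rather than in a remote service:

```python
# A minimal sense-think-act loop. All names and numbers are illustrative.

def sense(world):
    """Gather observations (stand-in for cameras, lidar, tactile skin)."""
    return {"obstacle_distance_m": world["obstacle_distance_m"]}

def think(observation, safe_distance_m=1.0):
    """Decide an action; keep the safety check in this fast, onboard loop."""
    if observation["obstacle_distance_m"] < safe_distance_m:
        return "brake"
    return "cruise"

def act(action, world):
    """Apply the action (stand-in for motors and actuators)."""
    if action == "brake":
        world["speed"] = 0.0
    return world

world = {"obstacle_distance_m": 0.4, "speed": 2.0}
world = act(think(sense(world)), world)
```

A guardian-autonomy layer would wrap this loop: a human (or a supervisory policy) can override the action before `act` runs.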

From bodies and brains to co-design

You don’t bolt software onto a body and hope for the best. The book shows you how co-design—optimizing hardware and control together—yields task-specific, reliable robots faster. Self-driving cars illustrate the software stack (perception, localization, planning, control), while computational making (3D printing, laser cutting) turns digital designs into parts quickly. Rus’s lab uses simulation to explore thousands of body–brain pairings, then fabricates the best candidates, closing the loop between models and the real world.

The last centimeter is the hardest

Roaming through free space is easier than touching the world. Everyday tasks—unscrewing a jar, lifting a wine glass—require dexterous contact. Rus explains why soft, sensorized hands (tulip grippers, compliant fingers) offload complexity from software. Rocycle squeezes ambiguous items to “feel” whether they’re paper or plastic; a three-finger grip stabilizes while a fourth finger explores. The takeaway: better hands make smarter robots because mechanics, sensing, and control co-evolve.

Learning that transfers and explains itself

Robots learn by doing—often first in simulation. Reinforcement learning and imitation learning let systems practice at scale (OpenAI’s Rubik’s Cube hand, Pulkit Agrawal’s cheetah). But Rus warns: opaque deep nets can be brittle and biased. She introduces liquid networks—compact, causal models (a 19-neuron driver) that reveal what they attend to (pavement, horizon) and offer proofs about behavior. The goal is accuracy plus interpretability, not black-box cleverness for safety-critical tasks.

Extending reach, restoring strength, and buying back time

Drones scout whales without disturbance (Falcon), SoFi swims among fish, and snakebots slip through gaps. Teleoperation rooms (Oculus + Baxter) let you guide robots through dangerous “last miles.” Soft exoskeletons (FOAM muscles, Rob Wood’s thin sensors, AFFOA textiles) lighten loads in warehouses (Verve), aid rehab, and preserve mobility with age. In homes and hospitals, automation aims to free time for high-value human work—Roomba cleans, autonomous wheelchairs return therapists’ minutes to care.

Safety, equity, and shared prosperity

The book insists on responsibility: certify systems like we certify drugs; design for security (remember the hacked Jeep Cherokee) and human-aware safety (avoid the chess-robot finger incident). Rus offers an 11-attribute checklist (Safe, Secure, Assistive, Causal, Explainable, Equitable, etc.) and a workforce plan: teach computational thinking early (Bee‑Bots, Scratch), expand maker skills, and fund mid-career reskilling (Amazon Upskilling, Bit Source) so automation augments rather than polarizes employment. (Note: this echoes the historical pattern David Autor describes—automation shifts tasks and can raise demand when paired with new skills.)

Bottom line: pair the heart with the chip. Use robots to extend human reach, restore capability, and reclaim time—while demanding safety, explainability, and education that let everyone participate in the gains.


Bodies, Brains, and Co‑Design

Rus brings you from vision to practice: a robot is a body (chassis, actuators, sensors, power), a brain (onboard compute, algorithms), and a communication substrate—co-designed to do a job. You can’t hang a heavy lidar on a snakebot, and you can’t tolerate cloud latency when braking a car. Successful systems marry morphology and control from day one, then iterate with realistic simulators and rapid fabrication.

The autonomy stack, demystified

Using self-driving as a teaching case, Rus breaks down the stack: perception fuses cameras, lidar, radar; localization estimates where you are relative to an HD map; planning searches configuration space for safe, efficient paths; control turns waypoints into torques. Dozens of specialized modules—not a single “AI”—work in concert. You test them in simulation first, then on closed tracks, then on roads with layered safety (guardian autonomy, fallback rules).
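
The stack’s division of labor can be caricatured as a pipeline of small, separately testable functions. The data shapes and the naive detour planner below are assumptions for illustration, not any real autonomy codebase:

```python
# A toy perception -> localization -> planning -> control pipeline.
# Data shapes and the detour heuristic are invented for illustration.

def perceive(raw):
    """Fuse raw detections into a scene description."""
    return {"obstacles": raw["detections"]}

def localize(raw):
    """Estimate pose (here, trivially from a GPS-like reading)."""
    return {"x": raw["gps"][0], "y": raw["gps"][1]}

def plan(pose, scene, goal):
    """Pick waypoints; insert a crude detour if anything is in the way."""
    waypoints = [pose, goal]
    if scene["obstacles"]:
        waypoints.insert(1, {"x": pose["x"], "y": goal["y"]})
    return waypoints

def control(waypoints):
    """Turn the next waypoint into a steering command."""
    nxt = waypoints[1]
    return {"steer_toward": (nxt["x"], nxt["y"])}

raw = {"detections": ["parked car"], "gps": (0.0, 0.0)}
cmd = control(plan(localize(raw), perceive(raw), {"x": 10.0, "y": 5.0}))
```

Because each stage has a narrow interface, you can swap in a better planner or perception module—or test one in simulation—without touching the rest, which is the modularity the section describes.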

Co-design as search over morphologies and controllers

Instead of picking a body then forcing a controller to cope, you jointly optimize. Define objectives (speed, payload, energy), parameterize both body and brain, and let algorithms explore thousands of candidates. The output is a Pareto front of designs—no metric improves without another worsening—so you choose a fit-for-purpose compromise, print parts, and test. This approach reduces time-to-prototype and democratizes design: coding and objectives become as important as deep mechanical expertise. (Note: success hinges on faithful simulators; you must iteratively tune friction, compliance, and sensing to match reality.)
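
The search idea reduces to: sample candidate body–brain parameter pairs, score each on competing objectives, and keep the non-dominated set. The objective functions below are invented stand-ins for a real simulator:

```python
# Toy co-design search: sample (body, brain) parameter pairs, score each
# on two competing objectives, and keep the Pareto front. The objective
# functions are made-up stand-ins, not the book's models.
import random

def evaluate(design):
    """Return (speed, energy_cost) for a (leg_length, gain) pair."""
    leg, gain = design
    speed = leg * gain            # longer legs + higher gain -> faster...
    energy = leg**2 + gain**2     # ...but both cost energy
    return speed, energy

def pareto_front(designs):
    """Keep designs no other design beats on both objectives at once."""
    scored = [(d, *evaluate(d)) for d in designs]
    front = []
    for d, s, e in scored:
        dominated = any(s2 >= s and e2 <= e and (s2 > s or e2 < e)
                        for _, s2, e2 in scored)
        if not dominated:
            front.append(d)
    return front

random.seed(0)
candidates = [(random.uniform(0.1, 1.0), random.uniform(0.1, 1.0))
              for _ in range(200)]
front = pareto_front(candidates)
```

A real co-design loop would replace `evaluate` with physics simulation and the random sampler with an evolutionary or gradient-based search, but the select-the-front structure is the same.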

Computational making and local fabrication

Rapid prototyping tools—3D printers, laser cutters, soft-matter casting—let you turn digital blueprints into hardware in hours, not months. Rus recounts how early access to 3D printing transformed her lab; the same tools now fit in community makerspaces. This enables just-in-time parts, local repair, and custom robots for niche tasks (e.g., disaster response bots tailored to unusual debris patterns).

Shape-shifting systems: programmable matter in practice

Co-design scales to swarms. M‑Blocks use electro‑permanent magnets and flywheels to hop, attach, and reconfigure; Pebbles miniaturize that idea. The same principles power Roboats in Amsterdam: rectangular boats dock with origami-like arms to form pop-up bridges or stages. You get environment-as-robot: structures become task-adaptive. Full “robotic sand” remains aspirational, but modular furniture, transforming car trunks, and civic infrastructure are near-term wins. (Compare to Neil Gershenfeld’s “programmable matter” and Radhika Nagpal’s swarm robotics for conceptual kin.)

Safety by design, loops kept local

When milliseconds matter, keep the control loop onboard. Rus argues for architectural safety: redundant sensors, watchdogs, and fail-safe behaviors that don’t depend on connectivity. Pair this with human-in-the-loop modes—teleoperation takeovers, simple interfaces—for edge cases. The net effect: bodies and brains that are not only optimized for performance but also architected for predictability and resilience.

Takeaway: treat robots as systems. Co-design morphologies and controllers, prototype fast with computational making, and extend the idea to modular swarms that reconfigure the world to the task.


Touch, Hands, and Manipulation

What you do without thinking—zip a jacket, twist a jar, lift a bowl—pushes robots to their limits. Rus shows that manipulation is the “last centimeter” problem: contact introduces friction, deformation, and slip that demand sensitive hardware and smart control. The path forward is to let the body help the brain—compliance and tactile sensing absorb uncertainty so software doesn’t have to compute every contact point in advance.

Why touch changes everything

A lightbulb presents glass for one finger and metal threads for another. You instinctively balance forces; a robot needs distributed sensing and grip control to avoid dropping or crushing it. Soft, sensorized fingers detect incipient slip and adjust pressure. This is “mechanical intelligence”: hardware that constrains errors and simplifies planning.
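
The grip-adjustment idea can be sketched as a loop that raises pressure only while a slip signal persists. The thresholds and the slip model below are invented for illustration:

```python
# Minimal slip-reactive grip control: increase pressure while slip is
# detected, never exceeding a safe maximum. Numbers are illustrative.

def adjust_grip(slip_signal, pressure, step=0.1, max_pressure=1.0):
    """Raise pressure while slip is detected; cap at the safe maximum."""
    while slip_signal(pressure) and pressure < max_pressure:
        pressure = min(pressure + step, max_pressure)
    return pressure

# Pretend the object stops slipping once pressure reaches 0.5.
slips = lambda p: p < 0.5
final = adjust_grip(slips, pressure=0.2)
```

The controller never needs the object's weight or friction coefficient in advance—it reacts to the tactile signal, which is exactly the complexity-offloading the section describes.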

Soft grippers that conform and generalize

The tulip gripper—silicone skin over an origami skeleton—collapses around caps and handles, securing objects via vacuum without perfect pose estimates. Rus’s Rocycle adds tactile sensors and squeeze-based classification: when vision is uncertain, the hand tests deformability to decide paper vs. plastic. These examples expand usable object sets and reduce brittleness compared with rigid two-finger pincers that require precise planning.

Three fingers to hold, a fourth to explore

Building on Ken Salisbury’s intuition and Rus’s PhD work, the book highlights a pragmatic pattern: three fingers stabilize, a fourth “walks” to discover shape and adjust. This split of duties enables in-hand manipulation like reorienting a screwdriver or peeling tape without regrasping. It also curbs computational load by delegating fine adjustments to compliant mechanics.

Perception, planning, and the last centimeter

Even with soft hands, perception and planning remain critical. Vision estimates pose; tactile arrays refine contact; controllers blend force and motion (impedance control) to trace trajectories that tolerate uncertainty. Rus’s anecdotes—from cake cutting to Bakebot’s cookie assembly—show how visual recognition, inverse kinematics, and force feedback must cohere for kitchen-grade reliability. (Note: for microscale precision, e.g., surgery, systems like da Vinci favor rigid precision and high-fidelity control, reminding you that “soft vs. rigid” is task-specific.)

From lab demos to everyday chores

Folding laundry (PR2, FoldiMate) and swapping a lightbulb reveal gaps between demos and products: deformable cloth, occlusions, and edge cases explode complexity. Rus’s message is practical: invest first in hands and sensing. Smarter end-effectors generalize across object families, cut down on per-object coding, and accelerate learning from demonstration. When the body does more of the thinking, your robot spends less time stuck on the last centimeter—and more time finishing the job.

If you’re evaluating a manipulation solution, ask three questions: How does it detect slip? How does it adjust grip without reprogramming? And how quickly does it learn new objects? Robots that answer all three will move beyond showcases into your home, warehouse, or clinic.


Learning That Explains Itself

Robots improve by practicing—and simulation lets them practice safely at scale. Rus surveys reinforcement learning, imitation learning, and sim-to-real transfer, then makes a pivot many skip: accuracy alone isn’t enough. You also need models that are causal, auditable, and robust to adversaries and bias—hence her focus on explainable methods like liquid networks for safety-critical autonomy.

Practice in pixels: RL and imitation

Reinforcement learning (RL) discovers policies by trial and error. Pulkit Agrawal’s simulated cheetah explores thousands of clumsy gaits until it runs efficiently. OpenAI’s Shadow Hand solves a Rubik’s Cube by training in simulation before transferring to hardware. Imitation learning jump-starts competence by copying human behavior—a model trained on Boston driving data learns to map observations to actions—but it inherits its dataset’s gaps. Rus’s team blends both: seed with real drives, then expand in VISTA simulation to conjure edge cases (erratic drivers) you can’t safely stage on roads.

Crossing the sim-to-real gap

Domain randomization is the bridge. Vary friction, mass, textures, lighting, and sensor noise during training so the learned policy tolerates reality’s messiness. OpenAI hardened its Rubik’s policy by randomly changing cube size and friction until the hand shrugged off perturbations like a draped blanket or a nudging plush toy. Rus also champions distributed learning: 300 Baxters sharing grasps could learn a million picks in two weeks, turning fleet data into collective skill (think of how Tesla and Waymo leverage fleets).
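
Domain randomization itself is simple to express: draw fresh physics parameters for every training episode so the policy never sees a single fixed simulator. The ranges and the skeletal "trainer" below are illustrative assumptions:

```python
# Hedged sketch of domain randomization: each episode samples its own
# simulator configuration. Ranges and the trainer are illustrative.
import random

def randomized_episode(rng):
    """Sample one simulator configuration for this episode."""
    return {
        "friction": rng.uniform(0.2, 1.2),
        "cube_size_cm": rng.uniform(5.0, 6.5),
        "sensor_noise": rng.gauss(0.0, 0.02),
    }

def train(num_episodes, seed=0):
    rng = random.Random(seed)
    configs = [randomized_episode(rng) for _ in range(num_episodes)]
    # A real trainer would roll out the policy in each config and update
    # it; here we only materialize the spread of conditions it would see.
    return configs

configs = train(1000)
frictions = [c["friction"] for c in configs]
spread = max(frictions) - min(frictions)
```

The breadth of sampled conditions is the whole trick: a policy that succeeds across this spread has nothing simulator-specific left to overfit to.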

The black box problem: brittleness and bias

Large deep nets can flip a 98%-certain “dog” into an “ostrich” with tiny pixel tweaks. That’s entertaining online but dangerous on roads—a perturbed stop sign misread as a yield sign invites catastrophe. Bias compounds risk: models trained on skewed histories (e.g., loans favoring certain groups) replicate injustice unless you audit, rebalance, and stress-test. Rus and Alexander Amini show how uncertainty analysis surfaces underrepresented subpopulations for targeted data augmentation.

Liquid networks: small, causal, inspectable

Inspired by the 302-neuron C. elegans, liquid networks use neurons governed by differential equations with adaptive time constants. They stay tiny yet expressive, and their dynamics are mathematically analyzable. In a driving task, a 19-neuron liquid network focused on pavement and horizon—features you’d expect—while a 100k+ deep net scattered attention. You can extract decision trees from liquids, proving causal focus and aiding certification. (Note: liquid networks are not universal—vision-scale representation may still favor large models—but they shine where you must guarantee attention and behavior.)
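
A single such neuron can be sketched from the description above, assuming the liquid-time-constant form dx/dt = −(1/τ + f(u))·x + f(u)·A with simple Euler integration. The constants and the gating function are illustrative, and a real liquid cell also gates on its own state:

```python
# Toy single "liquid" neuron with an input-dependent time constant.
# Form assumed: dx/dt = -(1/tau + gate(u)) * x + gate(u) * A.
# Constants and the gate are illustrative, not the published values.
import math

def gate(u, w=1.0, b=0.0):
    """Input-dependent gate (sigmoid). Real cells also gate on state."""
    return 1.0 / (1.0 + math.exp(-(w * u + b)))

def step(x, u, tau=1.0, A=2.0, dt=0.01):
    """One Euler step of the neuron's ODE."""
    g = gate(u)
    dx = -(1.0 / tau + g) * x + g * A
    return x + dt * dx

def run(inputs, x=0.0):
    for u in inputs:
        x = step(x, u)
    return x

# Under a sustained input, the state settles toward an input-dependent
# fixed point—the "adaptive time constant" at work.
steady = run([1.0] * 2000)
```

Because the dynamics are a small, explicit ODE, you can solve for fixed points and bound behavior analytically—the inspectability the section contrasts with opaque deep nets.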

Practical recipe for safe learning

Use simulation early for breadth, mix imitation for quick competence and RL for robustness, randomize heavily for transfer, and cap the loop with interpretable controllers. Pair big perception backbones with small, auditable control modules (liquids or classic model-based controllers) for safety-critical decisions. Bake in adversarial testing and dataset audits as standard QA, not afterthoughts. Plan for compute and energy costs—RL can be expensive; you don’t want training economics to gate safety.

Bottom line: make your robots learn a lot—but also make them explain what they learned, why they act, and how they’ll behave when the world misbehaves back.


Wearables, Muscles, and Energy

If you could wear a robot like clothing, what would you regain? Rus explores soft exoskeletons and artificial muscles that protect backs, extend endurance, and restore mobility—then grounds the vision in hardware realities: actuators, sensors, batteries, and on-device AI. The promise is intimate assistance you forget you’re wearing; the challenge is delivering strong, safe, energy-efficient help in a discreet form factor.

Soft exoskeletons you can live in

Forget clanking suits. You meet thin, compliant actuators like FOAM (fluidic origami-inspired artificial muscles) that expand with air to deliver high strength-to-weight, and soft strain sensors (e.g., Rob Wood’s) woven into textiles to read muscle intent. Computational textiles (AFFOA) conduct, store, and even compute—turning garments into platforms. Early systems like Cyberdyne’s HAL and Verve’s soft exosuits show impact: a 20‑lb box feels a dozen pounds lighter, and long shifts get safer.

Use cases: rehab, work, everyday life

In rehab, soft suits retrain gait after spinal injuries (Rus draws on John Hollerbach’s work here). On construction lines, microbursts of assistance reduce repetitive strain and prolong careers. In daily life, a backpack could offload books; a tennis sleeve could coach your forehand (Rus riffs on Serena Williams) while protecting your elbow. Personalization is key: capture your motion data privately (differential privacy), fit the model to your gait, and adapt over time.

Hardware priorities beyond muscles

Actuators need to be compliant, powerful, and electrically driven to ditch bulky pumps. Batteries remain the bottleneck: you want flexible, higher‑density cells, structural batteries integrated into frames, and supplemental harvesters (walking energy, thin-film solar). Rus cites promising materials (Tomás Palacios) and paper-like solar advances (Vladimir Bulović) to stretch runtime. Denser sensing (lidar costs dropping; skin-like tactile arrays rising) and efficient AI chips bring intelligence to the edge without burning watts.

Safety, security, comfort, cost

Close-to-body robots demand extra care. Encrypt links to prevent malicious torque commands; add mechanical fail-safes; keep thermal loads low. Make garments breathable (micropores), easy to don, and simple to clean. Drive down costs with scalable fabrication and standardized modules so a clinic—or even a community makerspace—can fit suits within days, not months. (Note: medical devices face regulatory hurdles; certification pathways like those for pacemakers offer a model.)

From concept to closet

Rus’s advice is practical: if you’re building wearables, integrate thin actuators, textile sensors, and on-garment inference hardware; if you’re buying, look for private-by-design learning, comfort over hours of wear, and clear safety interlocks. As batteries, actuators, and AI chips improve together, wearable robots will move from labs and warehouses to your closet—helping you hike farther, lift safer, and age with confidence.

The broader lesson: great bodies unlock great autonomy. Solve muscles and energy, and the rest of the stack—perception, learning, control—has room to shine.


Extend Reach, Reclaim Time

Robots act as your distant eyes, ears, and hands—then give you hours back by taking on drudgery. Rus connects expeditionary tools (drones, snakebots, robotic fish) with everyday automation (smart mobility, hospital logistics) to argue that the highest-value impact of robotics is time: time you redirect to care, creativity, and community.

Throwing your senses farther

Aerial drones like Falcon let Roger Payne census whales from cliffs without disturbance. SoFi glides among fish, observing unobtrusively. OceanOne lets you “feel” underwater artifacts at 100 meters; snakebots like FLX Bot slither into cracks for inspection and repair. Even on Mars, Ingenuity scouts routes. The common thread is respectful presence: extend perception without altering ecosystems or endangering people.

Teleoperation and the hybrid labor loop

Autonomous robots handle the routine; humans step in for the odd and risky. When a system gets stuck, you—or a remote worker—drop into an Oculus-powered station to guide a Baxter through the last tricky step. Cold-storage warehouses, disaster zones, and nuclear sites all benefit from this “human-in-the-loop autonomy.” It’s also a training loop: every teleop episode becomes new data for the robot to learn from.

Mobility and chores: turning hours into minutes

Zipline’s drones change clinical outcomes by compressing delivery windows from hours to minutes. On roads, guardian autonomy—shared control with smarter roads and V2X—turns commutes into safe, productive time before full autonomy is ubiquitous. At home, Roomba handles floors; in the kitchen, Bakebot shows what’s possible (and what’s hard) for food prep. In hospitals, autonomous gurneys and self-driving wheelchairs shift precious minutes back to therapists and nurses.

Design for human time

Rus offers a litmus test: ask what time a robot frees and for whom. Are you moving tedium from a nurse to a patient, or actually reclaiming care time? Center workflows, not novelty. Build shared platforms—fleets of land, air, and sea robots—that democratize access to distant places for researchers, journalists, and local communities. Plan around constraints—battery life, autonomy limits, ethics of remote presence—but keep the goal in view: give people more time for the heart’s work.

The upshot: extending reach and reclaiming time are two sides of the same coin. When robots push the frontier outward and pull drudgery downward, you can invest your hours where they matter most.


Safety, Certification, and Work

Power without guardrails breeds avoidable harm. Rus closes the loop with a pragmatic ethics and policy program: certify safety like the FDA certifies drugs, design for security and human awareness, and prepare the workforce so automation augments more than it displaces. The goal isn’t to slow progress—it’s to steer it.

From cinematic fears to real risks

The scary failures aren’t sci-fi uprisings; they’re foreseeable engineering misses. Hackers remotely controlled a Jeep Cherokee’s steering and brakes because security wasn’t a first-class design requirement. A chess robot grabbed a child’s finger because it lacked human-aware safety and robust perception. These incidents argue for systematic testing, red teaming, and design norms that assume humans—especially children—will behave unpredictably around robots.

A checklist for responsible robots

Rus proposes eleven qualities: Safe, Secure, Assistive, Causal, Generalizable, Explainable, Equitable, Economical, Certified, Sustainable, and Impactful. Treat this as a project scorecard. For example, use interpretable controllers (liquid networks) to meet “Causal” and “Explainable,” audit and rebalance datasets for “Equitable,” and measure energy budgets for “Sustainable.” Certification bodies—sector-specific at first—should test and monitor safety-critical deployments, much like aviation, medical, and food safety regulators do today.

Changing practice

Adopt adversarial testing, require audit trails for decisions, and include diverse stakeholders in “what could go wrong” workshops that end with concrete mitigations.

Education and the future of work

Robots automate tasks, not whole jobs, but task shifts can polarize wages if you don’t invest in people. Rus points to evidence from factories (the “Green Hulk” line hired more after automation) and to macro studies showing productivity can expand employment when paired with reskilling. She calls for computational thinking in K‑12 (decomposition, abstraction), maker skills in every school (Bee‑Bots in Fiji, Scratch), and lifelong learning through microcredentials, apprenticeships, and employer programs (Amazon Upskilling, Bit Source’s coal-miner coders).

Policy to spread the gains

Fund reskilling tied to regional industry, incentivize companies to provide apprenticeships, and create local fabrication hubs so communities can build and repair their own tools. International coordination matters—ethical norms differ, but safety, auditability, and human rights should travel across borders. (Note: think of the pandemic’s rapid, regulated vaccine push as a model of fast, multidisciplinary mobilization.)

In short: embed ethics into design and institutions, then equip people to thrive with the tools. That’s how you keep the heart with the chip—and the benefits with the many, not the few.
