
The Technological Republic

by Alexander C. Karp and Nicholas W. Zamiska

Two senior leaders at Palantir Technologies argue that the United States must rebuild the alliance between Silicon Valley and the state to meet what they see as mounting global threats.

Rebuilding the Technological Republic

How do you steer advanced technology toward public purpose without sacrificing liberty or speed? In The Technological Republic, Alex Karp and Nicholas Zamiska argue that your society wins or loses in the software century by rebuilding a mission-driven partnership between the engineering vanguard and the democratic state. They contend that America’s postwar synthesis—Vannevar Bush’s wartime-science model extended into peacetime via DARPA—built the conditions for the internet, satellites, and semiconductors. That alliance fractured as Silicon Valley drifted to consumer toys and reputationally safe work, just as AI emerged to redefine both economic value and military power.

You live at a hinge moment. Large language models, robotics, and autonomy show “sparks” of general reasoning ability, yet their internal logic remains opaque. Strategic rivals exploit this ambiguity ruthlessly; democratic cultures often argue themselves into paralysis. The authors reject a blanket pause. They want you to accelerate capability for national defense and public goods while building hard regulatory moats around high-risk systems (power grids, air traffic, nuclear command) where uncontrolled autonomy is unacceptable.

Where we began—and why it worked

The book opens by reminding you that the “tech-state” compact was not an accident. Roosevelt’s 1944 letter to Vannevar Bush framed science as national service; J.C.R. Licklider at DARPA funded research that enabled ARPANET and early “Man-Computer Symbiosis.” Fairchild and Lockheed built reconnaissance hardware for national aims. The state didn’t merely write checks; it set missions that channeled talent toward problems markets would not touch. That union produced scale and direction private capital rarely sustains on its own.

Core claim

“The union of science and the state…arose in the wake of World War II.”

How we drifted—and what it cost

Across decades, you watched technologists pivot from satellites to social feeds. Zynga, Groupon, and eToys exemplified a market craze for engagement and quick monetization. Meanwhile, public problems—defense software, large-scale medical research, city safety—became “innovation deserts” because they were messier, slower, and politically fraught. The cultural tenor shifted toward optionality: avoid commitments, dodge controversy, and keep the exit open. That optionality generated a vacuum in which national capacity atrophied just as AI began to reshape power.

AI at the crossroads—ethics and strategy

Modern AI revealed unsettling capability leaps. Sébastien Bubeck’s GPT‑4 tests (stacking a book, nine eggs, a laptop, a bottle, and a nail) showed stepwise reasoning that earlier models lacked; the Unicorn Drawing Test suggested developmental maturation. Public episodes—the LaMDA transcripts, Bing chat’s theatrics, and the March 2023 pause letter—exposed how quickly fear spreads. The authors argue you must treat this as a strategic-risk problem, not a metaphysical one: fence fast-moving AI where it can cause catastrophic harm, but push capabilities where they protect liberal societies and save lives (Note: this stance contrasts with Eliezer Yudkowsky’s call for extreme halts).

Hard power matters—now defined by software

The “Winner’s Fallacy” tempts you to assume past victory persists. It doesn’t. Deterrence in this century rides on code: swarming drones, sensor fusion, and decision aids. Thomas Schelling’s reminder—“the power to hurt is bargaining power”—means beliefs must be backed by credible tools. Yet in 2024, only about 0.2% (~$1.8B) of an ~$886B U.S. defense budget targeted AI capabilities, a mismatch with the tempo of software innovation. Meanwhile, competitors demonstrate swarm autonomy (Zhejiang University’s bamboo-forest experiment) and lead benchmarks in facial recognition.

Culture: belief, leadership, and execution

Rebuilding the technological republic requires more than money. You must revive public belief and reward conviction (think Aryeh Neier defending Skokie speech or Pauli Murray insisting George Wallace be heard). Inside organizations, you need engineering pragmatism (Dewey’s “muddy stream,” Ohno’s Five Whys), swarming structures that empower local scouts (Lindauer’s bees), and improvisational cultures that manage status fluidly (Keith Johnstone) rather than enforce brittle hierarchies (Asch and Milgram warn against conformity). Founder-led aesthetic judgment and shared ownership help sustain long-horizon focus (empirical “founder premium” studies back this claim).

What you do next

You rebuild a compact where government sets urgent missions and buys commercial software at speed; where technologists embed with users (as Palantir engineers did with soldiers confronting IEDs) and accept civic controversy in exchange for saving lives; where policing tools are governed by transparency and strict admissibility instead of abandoned to blanket bans; where public service pays and honors competence (Lee Kuan Yew’s salary model) and national identity is deliberately nurtured (Renan’s “vast solidarity”).

Key insight

“One age of deterrence, the atomic age, is ending, and a new era of deterrence built on AI is set to begin.”

In short, if you want liberal democracy to endure, you must fuse belief with engineering, accelerate software for defense and public benefit, and build institutions that convert dissent and data into working systems. The alternative is to leave your fate to whoever moves fastest—regardless of whether they serve the public interest.


The Lost Alliance Rewound

Karp and Zamiska start by recovering a story Silicon Valley often forgets: its greatest leaps came from a close partnership with the state. You see Vannevar Bush convert wartime coordination into peacetime missions; you meet J.C.R. Licklider at DARPA funding research that evolved into ARPANET; you watch Fairchild and Lockheed deliver reconnaissance systems that underwrote American security. The government didn’t just subsidize—it defined goals that markets alone would ignore because payoff horizons were long and risk high.

From mission to merchandise

Over time, the center of gravity shifted. Entrepreneurs began to prize convenience and virality—“things people love”—over public missions. The late 1990s and 2010s saw capital chase consumer toys: eToys skyrocketed then imploded; Zynga and Groupon rode engagement manias. This was more than a portfolio preference; it was a civic misallocation. High-friction public needs—defense software, medical R&D at scale, crime reduction—were starved of talent because error costs were political and rewards were slow.

Cultural oxygen depletion

A parallel cultural change amplified the drift: elites grew wary of public belief and durable commitments. Optionality became a virtue—keep exits open, avoid controversy, maximize reversibility. Harvard graduates poured into finance and consulting. Inside tech, “agnostic” builders elevated product and investor returns over civic mission. The market rushed to fill the void left by retreating public purpose with consumer gratification. The result: whole sectors became “innovation deserts.”

Case study: public safety’s chilling effect

Consider Palantir’s Gotham in New Orleans. Detectives, drowning in siloed data, adopted it as a “one‑stop shop” to link victims, suspects, and witnesses. Yet civil-liberties groups framed it as militarization, and the broader industry reacted with performative restraint—Amazon and IBM publicly restricted certain law-enforcement technologies. The message to builders was clear: touch public safety, and you risk brand and career. The authors ask you to weigh the unseen cost—lives lost when innovation withdraws—against legitimate concerns about privacy and bias.

Prescription

Do not moralize technology out of the public square. Regulate and audit it so it saves lives without becoming an engine of abuse.

Reweaving the compact

To repair the split, the authors want you to combine Silicon Valley’s obsession with outcomes with the state’s capacity for scale. That means mission-first procurement (set clear goals, buy commercially, iterate quickly); embedding builders with frontline users (engineers working shoulder-to-shoulder with analysts, soldiers, doctors); and building institutional pathways that make public projects a prestigious default, not a reputational hazard (a “technological peace corps” is floated as an example).

Guardrails instead of withdrawal

The authors push for differentiated governance: strict legal admissibility standards, transparency, and independent audits for tools with coercive power (e.g., gait recognition, drones). This is the middle path between laissez‑faire and blanket bans. It respects Blackstone’s ratio that protects the innocent, without surrendering cities to status-quo violence. You build trust by exposing model performance to scrutiny and by making policy choices reversible through oversight—not by staging corporate virtue-signals that leave communities underserved.

Why this matters now

AI’s acceleration makes the old division of labor obsolete. If you leave public problems to “the market,” you cede them to sectors optimized for advertising, not for defense or safety. If you leave everything to the state, you invite bureaucratic sclerosis. The lesson from Bush to DARPA is simple: you get durable breakthroughs when you fuse mission clarity, patient funding, and entrepreneurial execution. Recreate that bond, and you stand a chance of navigating the risks of autonomy and the opportunities of AI.

(Note: This argument rhymes with Mariana Mazzucato’s “entrepreneurial state,” but Karp and Zamiska emphasize cultural courage and organizational design—swarms, improv, founder taste—as the catalytic ingredients that translate funding into software that actually ships.)


AI’s Crossroads and Strategy

You face a dual reality: AI systems already demonstrate surprising reasoning, and you still do not understand their internal mechanics. That combination—capability plus opacity—makes AI a strategic asset and a public risk. The authors urge you to treat it as such: a governance challenge that requires targeted constraints, not a civilizational freeze. Build high walls around autonomy in critical infrastructure; move fast where AI can harden defense, reduce violence, and deliver public benefits.

Evidence of a leap

Sébastien Bubeck’s GPT‑4 experiments showed stepwise, common‑sense strategies—how to stack a set of unstable objects safely—that earlier models bungled. The Unicorn Drawing Test suggested developmental progression akin to a child’s maturation. When you add robotics—drones, manipulators—the risk profile multiplies. The book catalogs episodes (the LaMDA transcripts, Bing chat’s theatrics, the March 2023 open letter) to show how fragile your social trust is when systems convincingly simulate agency and intimacy.

Why pausing fails

Calls for broad moratoria (e.g., Eliezer Yudkowsky) collide with geopolitical competition. If adversaries press forward, unilateral restraint is not safety—it is strategic self‑harm. The authors reject blanket pauses and argue for a sharper line: accelerate AI in domains where it protects democratic societies and deters coercion, while building “regulatory moats” around autonomous integration points (grid control, air traffic, nuclear command) where failure modes are catastrophic and untestable in the wild.

Deterrence requires visible capability

Invoking Thomas Schelling, the authors press a hard point: “The power to hurt is bargaining power.” In plain terms, your soft power talks louder when your hard power is credible. The “Winner’s Fallacy” lulled the West into complacency. Meanwhile, Chinese entities lead facial‑recognition benchmarks (CloudWalk among the leaders) and Zhejiang University demonstrated autonomous swarms in dense bamboo—an image of urban‑warfare relevance. In the U.S., employee protests (Google’s Project Maven exit, Microsoft’s Army headset controversy) telegraphed ambivalence to defense partnerships, widening a capability gap with moralistic theater.

A practical risk posture

The authors want you to design for failure containment. Treat autonomy like aviation: certification regimes, red‑team penetration, and strict bounds on unsupervised action in safety‑critical contexts. Invest in interpretable logging, independent auditing, and continuous evaluation—especially when AI touches coercive power. At the same time, don’t strand life‑saving applications—medical triage, disaster response, IED detection—because of reputational squeamishness. The right question is not “Is it risky?” but “Where, how, and under what guardrails does the risk justify the benefit?”

Author’s caution

“We have now…arrived at a similar crossroads in the science of computing, a crossroads that connects engineering and ethics.”

Culture shapes capability

Technology alone does not choose its aims. Authoritarian states can act like founder‑CEOs of nations: they make decisive, coordinated bets. Democracies need a different engine—belief, transparency, and mobilized talent—to match that speed without sacrificing legitimacy. That means rewarding engineers who accept the messiness of public problems and building institutions that welcome dissent while insisting on delivery. In this framing, safety and speed are not enemies; they are codependent. If you don’t move fast, you can’t secure the freedom to argue about how to move safely.

Bottom line: don’t outsource your strategy to fear or to markets alone. Build capability where it deters harm, wall it where failure is intolerable, and reconnect the makers of code to the people and missions their tools must serve.


Deterrence Shifts to Software

The atomic age constrained superpowers with the arithmetic of megatons. The software century redefines advantage with code. Karp and Zamiska argue that AI‑enabled systems—drone swarms, decision aids, integrated cyber‑physical networks—will determine deterrence more than legacy hardware. The economics changed: software scales fast, updates overnight, and can be built by small, focused teams inside private firms. Your doctrine, budget, and procurement must change with it.

Why code is different

Nuclear weapons demanded industrial‑state mobilization and rare physics talent. Modern autonomy often requires compact teams, vast data, and deployment pipelines that iterate in days. General Mark Milley’s provocation—will manned aircraft rule the skies in 2088?—captures the intuition: cheap, intelligent masses can overwhelm exquisite, slow‑to‑update platforms. Zhejiang University’s bamboo‑forest swarm hints at what dense, urban autonomy looks like; cost curves favor those who iterate code fastest.

A budget that signals the past

The U.S. defense budget of ~$886B in 2024 devoted about $1.8B (~0.2%) to AI. That imbalance isn’t quibbling over line items; it reflects a structure built for jets and carriers, not software’s tempo. Traditional acquisition cycles stretch across years; they seek certainty before fielding. Software wants the inverse: field minimal capability, measure, iterate, and push updates as learning accrues. If you buy code like you buy planes, you lose.

Procurement for the software century

The authors propose a Manhattan Project–style mobilization for AI, but with twenty‑first‑century tooling: agile procurement that buys commercial software off the shelf, contracts that pay for outcomes not paper milestones, and pipelines that embed builders with users. Palantir’s engineers in Afghanistan iterated side‑by‑side with soldiers to mitigate IED threats—what they call “building a better rifle” in software form. This posture substitutes proximity for speculation: if you stand where the problem lives, your code ships to reality, not to PowerPoint.

Organizing like a swarm

Biology offers a template. Martin Lindauer’s bees (the Eck Swarm) converge on a new home through local scouting and waggle‑dance signals. In the best engineering teams, you empower “scouts” closest to the problem, you aggregate signals instead of mandating answers, and you tolerate friction as information. Keith Johnstone’s improv adds the social layer: fluid status, fast handoffs, no ritualized deference. When you design teams like swarms and troupes, you out‑maneuver adversaries locked in hierarchy.

Winning with delivery, not doctrine

Doctrine shifts when systems ship. That requires courage from both sides of the compact: leaders who elevate software to first‑order status and companies that accept mission constraints. The objective isn’t ideology; it’s working capability that deters aggression. If you realign budgets to software, shorten cycles, and embed engineers, you can saturate the field with adaptive tools before adversaries can copy them. If not, you’ll spend trillions on platforms that code‑dense adversaries route around.

Key line

“One age of deterrence…is ending, and a new era of deterrence built on AI is set to begin.”

The upshot for you: advocate budgets that privilege software and data; demand contracts that measure outcomes; and insist on organizational forms that reward local knowledge and speed. Deterrence is no longer just tonnage and range; it’s deployment cadence, model quality, and the cultural will to ship.


Cultures That Ship Reality

To build consequential technology, you need a culture that prizes reality over rhetoric. The authors call this the engineering mindset: descend into messy implementation, observe closely, and fix root causes without vanity. It is not a vibe; it is a discipline you can practice, one that resists conformity and moral theater while elevating delivery and learning.

Pragmatism over purity

John Dewey tells you to get down into the “muddy stream of concrete things.” Herbert Hoover notes engineers’ errors are visible to all; software either runs or it doesn’t. Taiichi Ohno’s Five Whys operationalizes this: ask why until you hit the real lever. A missed release often traces not to a flaky test but to a leadership feud or misaligned incentives. Palantir ran thousands of such reviews, writing them up to explain systems rather than scapegoat people.

Practical takeaway

Keep asking why. The fix is usually structural and interpersonal, not just a patch.

Designing for dissent

Asch’s conformity and Milgram’s obedience experiments warn you: groups punish dissent even when truth is obvious. To escape that trap, engineer independence into your processes—solicit private forecasts before group discussion, assign rotating devil’s advocates, and reward empirically grounded disagreement. Philip Tetlock’s “foxes,” not hedgehogs, make better predictions because they hold many small, revisable hypotheses instead of one grand theory.

Swarms and improv in practice

Martin Lindauer’s honeybee swarms offer an architecture: scouts with local knowledge, signals (the dance), and a convergent decision that emerges from competition. Keith Johnstone’s improv adds the tactic: fluid status transactions so that the best idea, not the highest rank, leads. At Palantir, status is treated as a tool—elevated when it unblocks progress, stripped when it ossifies. In Afghanistan, engineers embedded with soldiers turned proximity into iteration cycles that saved lives.

Rituals that make it real

If you lead, install rituals that operationalize this culture: weekly Five Whys on incidents; pre‑mortems before launches; red‑team drills; field embeds where builders shoulder user pain. Insist on written explanations that trace human incentives, not just stack traces. Use dashboards that record predictions and scores, creating a memory that punishes confident wrongness and rewards calibrated bets.

The payoff

Cultures like these out‑learn slower rivals. They convert friction into data, dissent into design, and errors into better code. In a software‑defined world where your opponent iterates nightly, the organization that metabolizes reality fastest—not the one that talks the most elegantly—wins.

(Note: This ethos aligns with “lean” and “DevOps” traditions; the book’s distinctive contribution is tying those micro‑habits to strategic survival in the age of AI and to civic projects, not just commercial uptime.)


Belief, Identity, and Leadership

Technology will not save the republic if the republic loses faith in itself. The authors argue that your public culture has abandoned belief—leaders fear staking clear positions, and elites prize survival over conviction. That aversion to commitment drains talent from public problems and weakens the tech‑state compact just as AI raises the stakes.

Courage versus caution

Contrast Aryeh Neier’s ACLU defending Nazi speech in Skokie or Pauli Murray urging Yale to permit George Wallace to speak, with university presidents’ hedged testimony in 2023. The earlier figures accepted reputational cost to defend principles. Today’s administrative risk‑management narrows the pool of people willing to lead collective projects. Scholarship shows invasive media reduces high‑quality political entrants; who volunteers for a life of gotchas?

Renan’s reminder

A nation is “a vast solidarity…constituted by the sentiment of the sacrifices one has made and of those one is yet prepared to make.”

Optionality’s downside

“Technological agnostics” prefer reversible bets. Harvard career flows toward finance/consulting and the eToys/Zynga era illustrate how markets reward low‑friction consumer plays. But public problems—defense software, medical breakthroughs, urban safety—demand long horizons and controversy tolerance. When companies like Amazon and IBM retreat from law‑enforcement technologies en masse, they may win applause while cities lose tools that might have reduced killings (Palantir’s New Orleans work showed both promise and the need for strict oversight).

Founder taste and shared stakes

Great technology requires aesthetic judgment—clarity about what good looks like. Studies show a “founder premium”: founder‑led firms outperformed by ~4.4 percentage points annually (1992–2002), and founder‑led S&P companies produced more widely cited patents. Founders often bind themselves to a mast (Odysseus) by constraining options to sustain a vision. Silicon Valley’s broad equity ownership aligned assistants and engineers in a shared bet—an internal culture of stewardship that outlasts fads.

Pay people to serve—and mean it

Public piety that demands monkish salaries backfires. Jerome Powell earns ~$190,000 leading the Fed—less than a first‑year banking associate—signaling misaligned incentives. Lee Kuan Yew tied ministerial pay to private‑sector comparators (cited 2007 average: ~$1.26M), arguing that real people have real obligations. Singapore’s governance and economic rise (GDP per capita from $428 in 1960 to ~$84,734 by 2023) are presented as evidence that paying for talent can enable national transformation. Hyman Rickover’s career complicates the purity test: abrasive, imperfect—and the father of the nuclear submarine.

Shared culture on purpose

Modern nations need thin but real bonds across millions of strangers. Robin Dunbar caps your stable social relationships at roughly 150 people; Benedict Anderson calls nations “imagined communities.” Lee Kuan Yew’s deliberate cultural engineering (e.g., the Speak Mandarin campaign) demonstrated that identity can be cultivated without erasing pluralism. The authors criticize post‑national theories that refuse to advocate any substantive civic culture; the market fills that vacuum with sports and celebrity, not sacrifice and mission.

If you want technologists to build for the republic, you must make public belief speakable again, compensate service competitively, and design governance that holds powerful tools to account without scaring away the people who might wield them responsibly.
