Execution is becoming cheap. The implications are more far-reaching than most people realize.
For most of history, the bottleneck was capability. Could you build it? Could you ship it? Could you find people skilled enough to turn the idea into a thing? The answer was usually no. Ideas were cheap. Execution was everything. The world rewarded those who could do, not those who could imagine. The venture capitalist's credo ("ideas are worthless, execution is everything") was not cynicism. It was an accurate description of where scarcity lived.
This was a structural feature of an economy built on symmetric intelligence. When every node in the graph has roughly equivalent cognitive capacity, the binding constraint is coordination and labor — assembling enough human minds, for enough time, with enough resources, to produce something. The idea is the easy part. The ten thousand hours of implementation are the hard part. A brilliant product vision is worth nothing without the team to build it, the capital to fund it, the operational capacity to ship it. Execution scarcity means the world selects for executors — managers, engineers, operators, people who can get things done.
AI is collapsing execution costs across every domain simultaneously. Code that required a team of engineers takes a single developer with a copilot. Analysis that consumed weeks completes in hours. Content that demanded a studio and a crew now emerges from a laptop. Design, legal review, financial modeling, customer support, marketing copy: the cost curves are all bending in the same direction, at the same time, for the same reason. The asymmetric intelligence described in earlier chapters is already showing up in income statements, in headcounts, in the time between idea and product.
Despite radically cheaper execution, the success rate of new ventures is not improving. Startup failure rates remain stubbornly above 90%. More businesses are being created — 5.5 million new applications in the United States in 2023 alone, a record[1] — but not more successful businesses. The denominator is exploding. The numerator is flat. If anything, the 2022 cohort showed the highest rate of first-year failures in fifteen years: 23.2% gone within twelve months[2].
If execution were the bottleneck, cheaper execution should produce more winners. Instead, it is producing more attempts. More prototypes, more MVPs, more landing pages, more pitches, and roughly the same number of enduring businesses. The bottleneck has shifted. It was execution. Now it is something else.
It is direction.
Not "can you build it?" but "should you build it?" The previous chapter argued that AI is becoming the primary customer in the economy. This chapter extends that argument to its consequence: if AI can execute anything, the scarce resource is knowing what to execute. Direction becomes the last human monopoly.
Every economic era is defined by what is scarce.
In the agrarian era, land was scarce. Whoever controlled the soil controlled the food supply, and whoever controlled the food supply controlled the population. Power mapped to acreage. Kingdoms were measured in hectares. Wars were fought over fertile valleys and river deltas. The feudal world (lords, serfs, manors, tithes) was an adaptation to land scarcity.
In the industrial era, capital was scarce. Machines were expensive. Factories required enormous upfront investment. Whoever could assemble enough capital to build the means of production could employ the labor and capture the surplus. Power mapped to capital. Empires were measured in factories and railroads. The political structure adapted again: banks, corporations, stock markets, labor unions, all organized around the fact that machines cost more than any individual could afford.
In the knowledge era, talent was scarce. Software cost nothing to copy but required rare minds to create. The best engineers, designers, and product thinkers commanded extraordinary premiums because they were the binding constraint on what could be built. Power mapped to cognitive capability. Google, Facebook, Apple: their competitive advantages were not factories or land or even capital. They were people. The war for talent was the defining economic conflict of the first two decades of this century.
Each transition inverted the prior scarcity. Land became abundant when industrial agriculture emerged; a single farmer could feed hundreds. Capital became abundant when financial markets matured; venture capital and credit markets made money the easiest input to acquire. Now talent is becoming abundant as AI systems absorb cognitive labor. The war for talent is ending, not because talent stopped mattering, but because AI is making it abundant.
The scarcity inversion: what was scarce becomes abundant, and what was assumed to be free becomes the bottleneck.
The numbers are specific and accelerating. GitHub's controlled study found that developers using Copilot completed tasks 55% faster than those without it[3]. That was 2023, an early, crude version of what AI coding assistance will become. Subsequent field experiments found more conservative but still significant gains: 8-22% more pull requests per developer, faster cycle times, higher merge rates. AI design tools like Midjourney compress weeks of creative exploration into hours. AI writing tools produce first drafts that would have taken a human writer days. Marketing workflows that required teams of specialists (copywriter, designer, analyst, media buyer) can be orchestrated by a single operator with the right AI stack.
And these are the early numbers. The 2023 tools were to the 2030 tools what the 1995 internet was to the 2010 internet. The trajectory is steep and shows no sign of flattening. Each generation of models is more capable, more general, more autonomous. The 55% productivity gain will look quaint within five years.
The inversion goes beyond efficiency. It changes the structure of the problem space. When execution is expensive, the search space is naturally constrained. You can only pursue a few ideas because each one costs a fortune to test. Scarcity imposes discipline. You think carefully before you build because building is expensive. The cost of execution acts as a filter, brutal but effective. Bad ideas die before they consume resources, simply because no one will fund their execution.
When execution is cheap, the search space explodes. You can build anything. You can test anything. You can spin up a prototype in a weekend and a company in a month. The constraint shifts from "can we build this?" to "should we build this?" And "should" is a harder question than "can." "Can" has a definitive answer. "Should" requires judgment, values, foresight, taste, qualities that no amount of compute can substitute for.
More capability means more options. More options means harder choices. The paradox is structural, not psychological. The choice space has expanded faster than the mechanisms for navigating it. We have 10x the execution capacity and 1x the directional wisdom. The ratio is getting worse every quarter.
The printing press did not make every book a bestseller. It made the question "which book to write" the central challenge of literary life. Before Gutenberg, writing a book was so laborious that the filtering happened at the point of production. Every book was hand-copied by scribes; only works deemed worthy of the effort were reproduced. After Gutenberg, anyone could publish. The filtering had to happen somewhere else: in the minds of readers, in the judgments of critics, in the slow sifting of culture. The press did not solve the problem of what was worth reading. It created that problem.
AI is doing to cognitive output what the printing press did to text: collapsing the cost of production across code, design, analysis, strategy, and operations simultaneously. And as with the press, we have not yet built the filtering mechanisms for an era when everything can be produced.
If direction is the scarce resource, we need a precise language for talking about it. There are beliefs that cost nothing to hold and beliefs that cost everything. There are beliefs that dissolve at the first sign of difficulty and beliefs that survive years of doubt. The distinction is economic.
Conviction is belief backed by sacrifice. Opinion is belief without cost.
The internet made opinions free. Anyone can tweet a take, post a thesis, publish a prediction. Social media made opinions abundant, with billions of people broadcasting beliefs about everything from geopolitics to the price of Bitcoin, from the future of AI to the correct way to raise children. AI will make opinions overwhelming. Large language models can generate plausible-sounding analysis on any topic in seconds. Ask GPT to write a thesis on the future of energy markets and it will produce something that reads like a McKinsey report. The marginal cost of an opinion has collapsed to zero. And a thing that costs nothing to produce is worth nothing as a signal.
Every person has opinions. Few have conviction. The difference is cost.
Columbus spent years lobbying the courts of Portugal and Spain, enduring rejection after rejection, risking his reputation and eventually his life on the conviction that you could sail west to reach the East. He was wrong about the geography (he thought he would reach Asia, not the Americas). But the conviction itself, the willingness to stake everything on a directional bet that contradicted the consensus, was the valuable thing.
Elon Musk put his personal fortune (the $180 million from the PayPal sale[4]) into two companies that every expert said would fail. SpaceX had three consecutive rocket failures. Tesla nearly went bankrupt twice. Every aerospace engineer said reusable rockets were impractical. Every auto analyst said electric vehicles were a niche product for wealthy environmentalists. Musk persisted not because he was smarter than the experts, but because he had conviction where they had opinion. The experts had views. Musk had skin in the game.
Jensen Huang pivoted NVIDIA from gaming GPUs to AI accelerators years before the market rewarded the bet. From 2012 to 2022, NVIDIA invested billions in CUDA, tensor cores, and an ecosystem for accelerated computing while Wall Street analysts questioned why a gaming company was spending so much on a speculative market. Huang's conviction was specific: parallel processing would become the substrate of machine intelligence. Not "AI might be important someday" but "GPU-accelerated parallel compute will be the foundation of the entire AI industry." He was right. But for a decade, he paid the cost of being early — the cost that separates conviction from opinion.
What these examples share is not brilliance. Plenty of brilliant people had the same information and reached different conclusions. What they share is sacrifice. Conviction requires four things that opinions do not.
- Deep understanding — not surface familiarity but structural comprehension of a domain. Columbus studied geography and navigation for years. Musk taught himself rocket engineering from textbooks. Huang understood chip architecture at the transistor level. You cannot have conviction about something you do not deeply understand. You can have an opinion. You cannot have conviction.
- Willingness to be wrong — conviction is a bet, and bets can lose. The person with conviction accepts this. They know they might be wrong. They proceed anyway. The person with an opinion never has to face this risk. Opinions are free to hold and free to discard.
- Willingness to bear cost — money, reputation, time, relationships. Conviction is expensive. Musk nearly went bankrupt. Leopold Aschenbrenner walked away from a position at OpenAI. Dylan Patel built a business on his own analysis when he could have taken a safe job at a consulting firm. (Both stories are told in full below.) If it is not expensive, it is not conviction.
- Persistence through ambiguity — the period between the bet and the outcome is long and unclear. Conviction must survive the desert of doubt, where evidence is mixed, critics are loud, and the outcome is genuinely uncertain. SpaceX's first three rockets exploded. Tesla's production hell lasted years. NVIDIA's stock was flat for a decade. Conviction that cannot survive ambiguity was never conviction at all.
The economy is drowning in opinion and starving for conviction. Every LinkedIn post has a "hot take." Every podcast host has a thesis. Every AI-generated report has a recommendation. The volume of stated beliefs about the future has never been higher. The quality has not improved. AI makes it trivially easy to produce confident-sounding analysis with zero underlying conviction.
The traditional mechanisms for filtering conviction from opinion are breaking down.
Venture capital requires partners to put money behind conviction, a genuine costly signal. But VC is limited by human bandwidth. A partner can evaluate perhaps ten deals deeply per year. The industry deploys hundreds of billions but through a bottleneck of a few thousand human minds, each with their own biases, pattern-matching heuristics, and social incentives to herd. When one firm invests in AI infrastructure, every firm invests in AI infrastructure. The costly signal of capital is diluted by the cheap signal of imitation.
Markets aggregate conviction through price discovery. Traders put capital at risk. Prices move. Information is revealed. But markets aggregate existing beliefs; they do not discover new directions. The stock market can tell you whether investors believe in Tesla. It cannot tell you whether someone should build the next Tesla. Markets are backward-looking mirrors dressed up as forward-looking windows.
Science verifies conviction through peer review, experimentation, and replication. But scientific discovery is slow and institutionally constrained. The replication crisis suggests this mechanism is fraying: as many as 70% of published results in some fields fail to replicate[5]. And science, by design, is backward-looking. It confirms what has been tested, not what should be tried.
None of these mechanisms scale to the speed of AI-enabled execution. We can now build faster than we can decide what to build. The execution engine runs at machine speed. The direction engine still runs at human speed, mediated by committees, quarterly reviews, partner meetings, and peer review cycles that take months.
Not all conviction is equal. A person can be deeply committed to a belief and deeply wrong. History is littered with confident fools: founders who burned through hundreds of millions on products nobody wanted, generals who marched armies into obvious traps, scientists who defended wrong theories for decades. Conviction alone is not the signal we need. What we need is conviction that has been tested against reality and survived.
Verified conviction: belief that has paid a cost, been specific enough to be falsifiable, and produced a result.
This is a higher standard than mere belief. It requires three elements: specificity (the claim was precise enough to be wrong), cost (something was risked), and outcome (reality rendered a verdict). A single correct prediction proves nothing; stopped clocks are right twice a day. Verified conviction requires a pattern, a track record, a demonstrated ability to see what others missed and to stake something on that seeing.
The same dynamic appears across domains.
Leopold Aschenbrenner was a researcher at OpenAI. In June 2024, he published "Situational Awareness," a 165-page thesis on the trajectory of AI development[6]. The document was remarkable not for its conclusions (many people believe AI will be transformative) but for its specificity. Aschenbrenner laid out timelines for AGI, argued that the national security implications were being underestimated, predicted the shape of the compute buildout with granular detail, and modeled the economic dynamics of the AI transition with a level of precision that most analysts would not risk. Within a year, he had raised $1.5 billion for a hedge fund built on the worldview articulated in that document[7].
The conviction was verified on multiple axes. Specificity: Aschenbrenner did not say "AI will be big." He made claims precise enough to be falsified, with timelines attached. Cost: he left OpenAI, sacrificing a position at the most important AI lab in the world. Track record: his analysis of compute scaling and capability trajectories proved directionally correct, and the capital markets recognized it by allocating $1.5 billion to his judgment[7]. The Collison brothers, Daniel Gross, Nat Friedman: these backers were not sentimental. They allocated capital because Aschenbrenner's conviction had been verified by specificity, cost, and outcome.
Dylan Patel built SemiAnalysis from a solo Substack into the preeminent research firm on semiconductors and AI infrastructure, reaching over 200,000 subscribers[8]. His analysis tracks the semiconductor supply chain with granular specificity: mapping data center construction via satellite imagery, modeling chip architectures at the transistor level, predicting supply-demand dynamics months before the industry recognized them. His conviction was verified by consistently being early and right on calls that mattered: the GPU shortage, the TSMC capacity constraints, the inference cost trajectory, the architecture of next-generation AI chips. Each correct call strengthened the signal. Each was specific enough to have been wrong.
The PayPal Mafia is perhaps the cleanest example of how verified conviction compounds. Peter Thiel, Elon Musk, Reid Hoffman, Max Levchin, and others built PayPal in the late 1990s. Their conviction about internet payments was verified by the company's success: the specific claim that digital payments would become the default mechanism for online commerce. But the more telling point is what happened after: that verified conviction became the seed of dozens of subsequent ventures. Thiel invested in Facebook when social networking was dismissed as a toy. Musk built Tesla and SpaceX when electric cars and private rocketry were considered fantasies. Hoffman built LinkedIn when professional networking online seemed unnecessary. Levchin built Affirm when fintech was not yet a category.
The verified conviction about how the internet would transform commerce gave them a directional advantage that compounded across decades. They had been right once, at cost, with specificity, and that experience calibrated their judgment for the next bet. Verified conviction begets more verified conviction.
Verified conviction is the strongest signal in a noise-filled world. It is not credentials (Aschenbrenner was 22 when he wrote "Situational Awareness"). It is not capital (Patel started with a free Substack). It is not institutional authority (the PayPal Mafia were outsiders building a product that banks laughed at).
What it is: a track record of specific, costly, correct directional bets. A demonstrated ability to see where things are going before the consensus arrives. This is what the economy values when execution becomes abundant.
How do we scale the identification and allocation of verified conviction? The examples above are anecdotal, discovered retroactively, celebrated after the fact. There is no systematic mechanism for finding the next Aschenbrenner before he publishes, the next Patel before his calls land, the next PayPal team before they build the product. The economy allocates verified conviction through accident, social networks, and pattern-matching by a small number of well-connected individuals. It does not have a primitive for it. And without that primitive, direction remains allocated by luck rather than by system.
If direction is the scarcest resource, how does an economy allocate it?
Multiple institutions exist to answer some version of "what should we build next?" Every one of them is inadequate for the world we are entering. They were designed for an era of execution scarcity. They are not designed for an era of direction scarcity.
PREDICTION MARKETS
Polymarket, Kalshi, Manifold: platforms where participants bet real money on the outcomes of future events. The mechanism is elegant and remarkably accurate at aggregation. But prediction markets are structurally passive: they aggregate existing beliefs rather than surfacing directions no one has yet proposed. The market answers "will this happen?", not "what should we try?" It is a mirror, not a compass.
VENTURE CAPITAL
The VC industry deployed $368 billion globally in 2024[9]. The mechanism has produced extraordinary outcomes; the entire modern technology sector was financed by venture capital. But VC is limited by human bandwidth (a top-tier partner might evaluate 1,000 pitches per year and invest in 10) and prone to herding. Partners pattern-match against past successes, systematically missing paradigm shifts that do not resemble prior hits.
SCIENTIFIC RESEARCH
Global R&D spending reached nearly $2.9 trillion in 2024[10]. The mechanism (hypothesis, experiment, peer review, publication) has produced the entire edifice of modern knowledge. But the system is slow: peer review takes months to years, and the incentive structure favors incremental work over bold claims. The replication crisis revealed that only 39% of psychology findings could be replicated[11]. The best ideas often come from outsiders, but the funding flows to insiders with track records in established paradigms.
CORPORATE STRATEGY
The largest companies allocate hundreds of billions annually to R&D (Apple $30 billion[12], Google $45 billion[13]). But the innovator's dilemma is real and persistent: existing revenue streams bend internal decision-making toward defending the present rather than discovering the future. The corporation optimizes for what it already does, because that is what the quarterly earnings call rewards.
Four mechanisms. Trillions of dollars in aggregate. And a common failure mode: none of them efficiently discovers, verifies, and funds directional conviction at scale.
Prediction markets aggregate but do not discover. Venture capital discovers but does not scale. Science verifies but does not move at market speed. Corporate strategy allocates but cannot escape its own gravity. Each mechanism solves part of the problem. None solves the whole.
The gap is structural. There is no existing mechanism that takes a novel directional insight ("this is what should be built, and here is why") and subjects it to rapid, scalable, verifiable testing while simultaneously allocating resources to the most promising directions. We have components of this system scattered across institutions. We do not have the system itself. The pieces exist. The integration does not.
The gap is widening. AI is accelerating execution speed while the direction mechanisms remain stuck at human speed. The engine is revving higher and higher, and everyone is steering with the tools of the last era, tools built for an era when the engine was the constraint, not the steering wheel.
In computer science, a "primitive" is a fundamental building block from which more complex operations are constructed. Addition is a primitive. Memory allocation is a primitive. Socket connections are a primitive. You do not rebuild these from scratch each time you write a program — you rely on them as infrastructure, as given, as substrate. The power of modern software is built on layers of abstraction, each layer providing primitives that the layer above takes for granted.
The modern technology stack is built on a hierarchy of primitives that would have seemed miraculous twenty years ago. We have primitives for compute — AWS, Azure, GCP provide processing power on demand, anywhere in the world, at any scale, billed by the second. We have primitives for storage — databases, object stores, file systems that hold exabytes without breaking. We have primitives for communication — APIs, webhooks, message queues that let any system talk to any other system. We have primitives for payment — Stripe processes transactions with a few lines of code, handling currency conversion, fraud detection, and regulatory compliance invisibly. We have primitives for identity — OAuth, Auth0, Passkeys that let users authenticate across services without building auth from scratch.
We do not have a primitive for direction.
No infrastructure layer takes a directional claim ("this should be built, and here is the evidence") and subjects it to systematic discovery, verification, and resource allocation. Each organization, each entrepreneur, each researcher must solve the direction problem from scratch, using ad hoc combinations of intuition, market research, social proof, and luck. It is as if every company that wanted to process a payment had to build its own banking infrastructure.
This is equivalent to the state of computing before cloud infrastructure. In the early 2000s, every company that wanted to build a web application had to provision its own servers, manage its own data centers, handle its own scaling. It was expensive, slow, and wasteful. Startups spent months on infrastructure before writing a line of product code. Then AWS launched EC2 in 2006, and compute became a primitive. You could get a server in minutes instead of months. The entire application layer exploded because the infrastructure layer was solved. Instagram scaled to millions of users with a team of thirteen. That would have been impossible five years earlier.
Direction needs the same transformation, because the AI economy cannot function without it. When execution is a primitive, when building things is as easy as calling an API, the only remaining question is what to call the API for. Direction becomes the rate-limiting step. Every other layer is scaling. This layer is not.
What would a direction primitive look like? It would need four properties:
- Discovery — a mechanism for surfacing novel directions that no one has yet articulated. Not aggregating existing beliefs (that is what prediction markets do) but generating new hypotheses about what should exist. A system that can identify gaps, opportunities, and possibilities that are not yet in the discourse.
- Verification — a mechanism for testing directional claims through costly signals. Not opinions or votes, but skin in the game. Money staked, reputation wagered, effort invested. Conviction that has paid a price. The cost is the filter. Without cost, the signal is noise.
- Allocation — a mechanism for routing resources toward verified direction. Connecting the signal to capital, talent, and compute that can act on it. Discovery without allocation is academic. Allocation without discovery is bureaucratic. The primitive must do both.
- Feedback — a mechanism for closing the loop. Results that verify or falsify the original direction. Track records that accumulate over time. Credibility that is earned through outcomes, not assumed through credentials. The loop must be tight, fast enough to keep pace with the execution layer.
What we are describing is a direction engine. Not a prediction network (Polymarket), not a funding mechanism (VC), not a research institution (the university). A system that discovers, verifies, and activates directional conviction as a primitive operation, a substrate on which the AI economy can build the way the application layer builds on compute, storage, and payment primitives. No such system exists today. But the economy is converging on the need for one.
What would a direction engine actually require? The architecture would need to combine elements of prediction markets (costly signals), the scientific method (verification through outcomes), and venture capital (resource allocation based on conviction), operating at machine speed without the bandwidth constraints of human gatekeepers.
The components would look like this.
First, costly commitment. Participants would put something at stake (money, reputation, computational effort) behind specific, falsifiable directional claims. Not "AI will be big" but "LLM inference costs will drop below $0.001 per thousand tokens by Q3 2026." Not "crypto will change finance" but "on-chain derivatives volume will exceed centralized exchange volume for at least one asset class within 18 months." The specificity is the filter. Claims that cannot be verified are opinions, not direction.
Second, track records. The system would track which participants generate verified conviction over time. Not a single correct prediction (anyone can get lucky once) but a pattern of consistent, specific, early directional calls that resolve correctly. These track records would create a new kind of meritocracy: merit measured not by credentials or capital, but by verified conviction. Did you see it coming, did you stake something on it, and were you right?
Third, resource routing. Verified direction would connect to allocation. When someone with a strong track record makes a new directional claim, the system routes capital, compute, and talent toward that direction with confidence proportional to the track record. Calibrated confidence, based on empirical evidence of directional accuracy.
Fourth, tight feedback. Every directional claim eventually resolves. The technology ships or it doesn't, the market emerges or it doesn't, the cost drops or it doesn't. Track records update. Credibility adjusts. The system learns which kinds of thinkers are right about which kinds of questions. A semiconductor analyst may have extraordinary insight into chip architectures and none into consumer behavior. The feedback loop builds a map of directional expertise, not directions alone.
No institution today combines all four. Prediction markets have costly commitment and feedback but no resource routing. Venture capital has resource routing and track records but operates at human speed. The scientific method has verification but takes years per cycle. Government grants have allocation but no skin-in-the-game filter.
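To make the shape of the loop concrete, the four components can be sketched as a toy data model. Everything here is hypothetical and invented for illustration (the class names, the hit-rate credibility measure, the proportional allocation rule); it shows how costly commitment, track records, routing, and feedback would fit together, not how a real system would be designed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    """A directional claim: specific enough to be falsifiable, with a stake."""
    author: str
    statement: str           # precise enough that reality can render a verdict
    deadline: str            # when the claim resolves
    stake: float             # the costly commitment; without it, this is opinion
    outcome: Optional[bool] = None  # None until resolved

class DirectionEngine:
    """Toy sketch: resolutions build track records; allocation follows them."""

    def __init__(self) -> None:
        self.claims: list[Claim] = []
        self.records: dict[str, list[bool]] = {}  # author -> resolved outcomes

    def commit(self, claim: Claim) -> None:
        # Costly commitment: a claim with nothing at stake is filtered out.
        if claim.stake <= 0:
            raise ValueError("a claim without stake is an opinion, not conviction")
        self.claims.append(claim)

    def resolve(self, claim: Claim, outcome: bool) -> None:
        # Feedback: reality renders a verdict and the author's record updates.
        claim.outcome = outcome
        self.records.setdefault(claim.author, []).append(outcome)

    def credibility(self, author: str) -> float:
        # Track record: hit rate over resolved claims; unknowns start at zero.
        resolved = self.records.get(author, [])
        return sum(resolved) / len(resolved) if resolved else 0.0

    def allocate(self, budget: float) -> dict[str, float]:
        # Resource routing: split the budget across authors of open claims,
        # in proportion to each author's verified track record.
        open_claims = [c for c in self.claims if c.outcome is None]
        weights = {c.author: self.credibility(c.author) for c in open_claims}
        total = sum(weights.values())
        if total == 0:
            return {}
        return {a: budget * w / total for a, w in weights.items()}
```

Even this toy version enforces the chapter's distinctions: `commit` rejects costless opinions, `credibility` is earned only through resolved claims, and `allocate` routes capital toward verified conviction rather than credentials. A real system would need calibrated scoring (weighting boldness and earliness, not just hit rate) and a far richer allocation rule.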
The gap is structural. Over time, it produces a continuously widening mismatch between the execution capacity of the system and the quality of direction it receives. More capability deployed in worse directions. More power with less wisdom.
Strip away everything AI can do. Automate execution, analysis, coordination, communication, computation, creation. Assume AI does everything that can be specified, measured, and optimized. What is left?
Four things remain that humans provide and that AI cannot generate without human input. They are not labor, products, or services. They are the parameters of the optimization itself.
Direction. What to optimize for. AI can optimize any function you give it, but it cannot choose the function. It can find the fastest route to any destination, but it cannot choose the destination. Direction is a commitment, not a computation. It emerges from values, from lived experience, from the irreducible fact of caring about something. You can train a model on every strategic document ever written, and it will produce excellent strategy memos. It will not produce a reason to care about the outcome. The memo is execution. The caring is direction.
Values. Whose wellbeing matters. How to weigh competing interests. What tradeoffs are acceptable and which are intolerable. AI can model the consequences of any value system with extraordinary precision: what happens if you optimize for GDP growth, for carbon reduction, for human longevity, for maximum individual freedom. It cannot tell you which to optimize for. It can illuminate the tradeoffs with perfect clarity but cannot make the tradeoff.
Taste. What is beautiful. What is meaningful. Taste is not preference; preferences can be learned from data, predicted from behavior, optimized through A/B testing. Taste is the capacity to distinguish between the adequate and the extraordinary, the functional and the inspired. The difference between a house and architecture, between a product and a craft, between a technically perfect pop song and a song that makes you weep. AI can generate infinite variations. A human with taste can select the one that matters — and more importantly, can articulate why it matters in a way that reshapes what everyone else considers possible.
Legitimacy. Consent to be governed by a system. Even if an AI system could determine optimal allocation, optimal governance, optimal social organization, perfectly calibrated to maximize human flourishing by every measurable metric, the system requires human consent to be legitimate. A world governed by an AI without human assent is a prison, no matter how optimal the conditions. The cell may be luxurious. It is still a cell. Legitimacy is granted, by humans, to systems they choose to trust. And it can be revoked.
These four things share a common feature: none are traditional economic outputs. They are inputs to the optimization function, the parameters that tell the execution layer what to do, how to weigh outcomes, what counts as success, and whether the whole enterprise has the consent of the governed.
The economy of the Sybilian condition is an economy of direction, not production. What you produce matters less than what you aim at. The factory matters less than the blueprint. The code matters less than the specification.
"Will AI take our jobs?" Yes, AI will take execution jobs, because it does them better, faster, cheaper, and at scale. The data analyst, the copywriter, the paralegal, the programmer writing boilerplate, the financial analyst building models: these roles are defined by execution, and their value rested on a scarcity (of cognitive execution) that no longer exists.
But AI will not take direction jobs. The entrepreneur who decides what company to build. The researcher who decides what question to investigate. The artist who decides what to express. The leader who decides what values to prioritize. These are direction roles, and they are the only roles that remain scarce when the Sibyl can execute anything.
The question for individuals shifts accordingly. It is no longer "what can you do?" but "do you know what should be done?" Not skill but judgment. Not capability but conviction. That is the scarce human contribution in the Sybilian condition.
Direction is scarce because choosing is hard. Not computationally hard. Existentially hard.
To choose a direction is to close other directions. Every commitment is a sacrifice of every alternative you did not choose. In a world of limited options, this sacrifice is small. You can only do a few things anyway. The opportunity cost is bounded by your own capacity. When building a company takes five years and all your money, you choose carefully, not because you are wise, but because the constraint forces wisdom upon you.
In a world of infinite optionality, where AI reduces the activation energy of any endeavor to near zero, commitment feels like an irrational act. Why choose one direction when you could pursue all of them? Why commit to a single company when you could prototype ten? Why stake your reputation on one thesis when you could hedge across many?
This is the paradox of abundance applied to direction itself. When everything is possible, nothing is chosen. When every path is open, no path is walked with enough persistence to reach the destination. The infinite option space produces paralysis, or worse, frenetic motion without progress. And the economy stalls, not from lack of capability but from lack of commitment.
The cultural signs are already visible. The rise of "multi-hyphenate" identities: founder-investor-creator-advisor-podcaster-angel, committed to nothing, dabbling in everything. The proliferation of "stealth mode" startups that never launch because launching would mean choosing: a market, a customer, a feature set, and thereby excluding all the other markets, customers, and features that remain theoretically possible. The epidemic of strategic optionality in corporate boardrooms, where every decision is deferred because keeping options open feels safer than committing to any one of them.
But direction without commitment is opinion. And the world drowns in opinion.
The core insight of the direction thesis is existential. The Sybilian condition requires a new kind of human: one who can choose better, who can hold conviction in the face of infinite alternatives, who can commit when commitment is no longer forced by scarcity but must be freely chosen in spite of abundance.
For all of human history, scarcity provided a scaffold for commitment. When you could only afford to build one thing, you built it with conviction — not because you were inherently decisive, but because the constraint made the decision for you. You committed to a career because switching was expensive. You committed to a community because moving was hard. Remove the constraint, and the decision falls entirely on the individual. The weight of choosing transfers from circumstance to character.
Kundera understood this, even writing in a different context: "the struggle of man against power is the struggle of memory against forgetting." In the Sybilian condition, the power is not political. It is the power of infinite possibility to dissolve commitment, to make every choice feel provisional, every conviction feel like one option among infinite equally plausible alternatives. The forgetting is the slow erasure of conviction by optionality, the gradual loss of the ability to say "this matters more than that" when everything is equally accessible.
The struggle is to remember what you believe — to hold a direction in your mind when every force in the environment conspires to scatter your attention across a thousand equally plausible alternatives. To choose, and to keep choosing the same thing, day after day, when the cost of switching is zero and the temptation to switch is infinite.
The scarcity of direction is a scarcity of will. Of commitment. Of the willingness to choose one future and sacrifice all the others. Technology alone cannot solve it, because it is a problem of character: of committing when external constraints no longer make the choice for us.
This is the most fundamental form of scarcity: not a scarcity of things, but a scarcity of direction.
In a world where anything can be built, the only question that matters is: what should be?