SYBIL
CHAPTER VII

Computed Prices

The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.
Friedrich Hayek, 1988

Hayek wrote this near the end of his life, as the Soviet Union was collapsing. It was a victory lap. The socialist calculation debate was over, and he had won.

For sixty years, economists had argued about whether central planning could work. Hayek's answer was definitive: no. Not because planners were corrupt or lazy, but because the problem was computationally intractable. The information required to allocate resources optimally was distributed across millions of minds, encoded in tacit knowledge, revealed only through the act of exchange itself. No planner could gather it. No computer could process it. The market was the only possible solution, not merely the better one.

He was right. For his era.

The conditions that made his argument definitive have changed.

II. THE MIMETIC TRAP

The mimetic dynamics of Chapter III (prices encoding narrative as much as value) compound the problem. Mimetic pricing is a hack for the absence of computation. We guess because we cannot calculate. We imitate because we cannot derive. The market is a distributed approximation algorithm for a problem no single node can solve.

What happens when a single node can solve it?

III. ALREADY COMPUTED

Computation has already replaced the guessing game across much of the economy.

Between 60 and 73% of all US equity trading volume is now algorithmic[1]. Not assisted by algorithms; executed by them. The human trader at the center of every economics textbook has been largely automated out of price-discovery for public equities. The prices you see on a stock ticker are not the output of human judgment. They are the output of competing algorithms, each running its own model of what the price should be, each executing trades in microseconds. The mimetic game continues, but the players are no longer human.

Algorithmic trading is the dominant mode of price formation in the world's largest capital markets. The floor traders are gone. The pit is empty. The humans who remain in the process are writing algorithms, not making trades. They are programmers, not price-discoverers.

Algorithmic trading has also changed the informational content of prices themselves. When a human trader buys a stock, the trade carries information about that trader's beliefs, research, and judgment. When an algorithm buys a stock, the trade carries information about the model's parameters, training data, and optimization objective. The price still aggregates information, but the information is now second-order. It is information about models of the world, not information about the world directly. The mimetic regress has acquired a new layer. Algorithms guess what other algorithms will do, based on models of what those algorithms are modeling.

Uber's surge pricing replaced politically negotiated flat rates with an algorithm that computes ride prices in real time based on demand, supply, traffic, and weather. The price of a ride from Midtown to JFK changes minute by minute. No human sets it. People complain about surge pricing, but they get rides when it rains.
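The mechanics are easy to sketch. Here is a toy version of such a rule in Python; the multiplier formula, coefficients, and cap are invented for illustration, not Uber's actual (and proprietary) model:

```python
# A toy dynamic-pricing rule in the spirit of surge pricing. The
# formula, coefficients, and cap are invented for illustration; they
# are not any company's actual model.

def surge_multiplier(requests: int, drivers: int, raining: bool) -> float:
    """Scale the base fare by the demand/supply imbalance."""
    if drivers == 0:
        return 3.0                                # hard cap when supply vanishes
    multiplier = max(1.0, requests / drivers)     # never below the base fare
    if raining:
        multiplier *= 1.2                         # weather shifts demand upward
    return min(multiplier, 3.0)                   # cap keeps prices tolerable

base_fare = 40.0                                  # Midtown to JFK, say
print(base_fare * surge_multiplier(requests=180, drivers=60, raining=True))
# 120.0: three riders per driver, in the rain, price pinned at the cap
```

The point of the sketch is the shape of the computation: the price is a function evaluated on live inputs, not a number anyone negotiated.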

Amazon takes this further. The Buy Box — the default purchase option on any product page — captures approximately 82% of all Amazon sales[2]. Who wins the Buy Box is determined by an algorithm that evaluates price, fulfillment method, seller history, and dozens of other variables. The result: Amazon's marketplace processes an estimated 2.5 million price changes per day. Sellers who do not use algorithmic repricing tools cannot compete. The price is no longer what a merchant decides to charge. It is what an algorithm calculates as optimal for conversion.
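A minimal sketch of what a Buy Box-style ranking might look like follows; the features and weights are hypothetical, since the actual model is proprietary and uses dozens of variables:

```python
# A toy Buy Box-style ranking. Features and weights are hypothetical;
# the real model is proprietary and far richer.

def buybox_score(price, lowest_price, fast_fulfillment, seller_rating):
    """Higher score wins the default purchase slot."""
    price_score = lowest_price / price            # 1.0 when cheapest
    fulfillment_score = 1.0 if fast_fulfillment else 0.6
    return 0.5 * price_score + 0.3 * fulfillment_score + 0.2 * seller_rating

offers = [
    {"seller": "A", "price": 19.99, "fast": True,  "rating": 0.97},
    {"seller": "B", "price": 18.49, "fast": False, "rating": 0.99},
]
lowest = min(o["price"] for o in offers)
winner = max(offers, key=lambda o: buybox_score(
    o["price"], lowest, o["fast"], o["rating"]))
print(winner["seller"])   # "A": fast fulfillment outweighs the lower price
```

Note that the cheapest offer does not win. The computed optimum is conversion, not price.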

Airlines pioneered this decades ago — a single route can see over 100,000 fare changes per day, with the fare you pay determined entirely by what the revenue management model calculates you will pay.

Insurance is replacing the actuarial table (a statistical hack for the absence of individual data) with behavioral pricing computed from telematics, biometrics, and real-time risk models.

The entire $600 billion global digital advertising market runs on computed prices[3]. Google's ad auction runs billions of times per day, computing what each impression is worth in the 300 milliseconds between page request and page load.
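The mechanism is a sealed-bid auction resolved in code. A minimal sketch of a second-price auction, the design family Google's ad auction historically belonged to (the bids are invented, and real exchanges also weight each bid by a predicted click-through rate before ranking):

```python
# A minimal second-price auction: the winner pays the runner-up's bid,
# not their own. Bids are invented; real ad exchanges adjust each bid
# by a predicted click-through rate ("quality score") before ranking.

def second_price_auction(bids: dict) -> tuple:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

bids = {"advertiser_a": 2.40, "advertiser_b": 1.85, "advertiser_c": 0.90}
print(second_price_auction(bids))   # ('advertiser_a', 1.85)
```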

What unifies all of these examples is a pattern that economics has not yet fully absorbed: the transition from price discovery to price computation. Discovery implies uncertainty, negotiation, emergence — agents groping toward equilibrium through repeated exchange. Computation implies determinism, optimization, derivation — a system calculating the equilibrium directly from inputs. The textbook model still assumes discovery. The economy increasingly runs on computation.

Look at the pattern. In every domain where data is abundant and transactions are frequent, computed prices have replaced or are replacing human-set prices. The holdouts (real estate, bespoke services, art) are domains where data is scarce, transactions are infrequent, or the good itself resists standardization. The frontier of computation is advancing steadily into these domains too, but more slowly.

Nobody mandated algorithmic pricing. The transition happened because computed prices are better. Every sector that adopted them did so for competitive reasons: firms that computed prices outperformed firms that guessed them. Convergent evolution. Same selection pressure, same solution.

Prices are already computed. The question is whether we will acknowledge what this means for the frameworks we use to understand economies. Our textbooks still describe price formation as if it were 1950: human agents, making deliberate choices, reaching equilibrium through repeated exchange. The reality is that most prices, by volume and frequency, are now computed by algorithms that no human fully understands, optimizing objectives that no consumer chose, at speeds that no regulator can monitor.

IV. THE CALCULATION

The socialist calculation debate hinged on a specific claim: that the information required for optimal allocation could not be centralized.

Hayek's argument had three components:

THE KNOWLEDGE PROBLEM

Relevant information is dispersed across millions of actors. Each farmer knows his soil. Each consumer knows her preferences. Each engineer knows his factory's capabilities. This knowledge is local, contextual, often tacit — difficult or impossible to articulate, let alone transmit.

THE AGGREGATION PROBLEM

Even if local knowledge could be transmitted, no central processor could aggregate it. The computational burden of synthesizing millions of data streams into coherent allocation decisions exceeds any planner's capacity.

THE INCENTIVE PROBLEM

Even if aggregation were possible, actors have no incentive to reveal their true information to a planner. They will lie, exaggerate, conceal — gaming the system for personal advantage. The market solves this by making revelation incentive-compatible: you reveal your preferences by paying for them.

Each problem was real. Each made central planning fail. The Soviet Union did not collapse because of bad intentions; it collapsed because Gosplan hit all three walls simultaneously.

Hayek won because he correctly identified the constraints.

Now examine those constraints under Sybilian conditions.

V. THE KNOWLEDGE PROBLEM, DISSOLVED

The farmer's knowledge of his soil is no longer tacit.

Sensors measure moisture content, nitrogen levels, pH balance, microorganism activity. Satellites track crop health through spectral imaging. Weather stations provide hyperlocal forecasts. The data that once existed only in the farmer's intuition now exists in databases, updated in real time, accessible to any system with the right permissions.

The consumer's preferences are no longer hidden.

Every purchase is logged. Every click is tracked. Every search query reveals intent. Every social media post signals taste. The preference functions that once existed only in individual minds now exist as behavioral data — not perfect, not complete, but far more legible than anything Hayek imagined.

The engineer's knowledge of his factory is no longer local.

Digital twins model every machine, every process, every bottleneck. IoT sensors report status continuously. Predictive maintenance algorithms anticipate failures before they occur. The tacit knowledge that once lived in experienced workers' heads is increasingly encoded in systems that monitor, learn, and optimize.

The world is becoming readable not because someone decided to read it, but because readability is more efficient than opacity. Legibility is the infrastructure; surveillance is the side effect. Businesses instrument their operations because instrumented operations are cheaper to run. Consumers accept tracking because tracked consumers get better recommendations. The incentives point toward transparency.

The scale of this externalization is worth quantifying. The number of connected IoT devices is projected to exceed 30 billion by 2030. Every connected device is a sensor converting local, tacit, contextual information into transmissible data. A smart thermostat converts "this room feels cold" into a temperature reading, a schedule pattern, and an energy consumption profile. A warehouse RFID system converts "the forklift operator knows where things are" into a precise location database. A wearable health monitor converts "I don't feel well" into heart rate variability, blood oxygen, and sleep quality data. The aggregate effect is the progressive dissolution of the knowledge problem — not its abolition, but its retreat.

The knowledge problem assumed that local knowledge could not be externalized. That assumption is failing. Tacit knowledge still exists, human intuition still matters, but the direction is clear. Each year, more of what was hidden becomes visible. More of what was tacit becomes explicit. More of what was local becomes global. Complete legibility is a limit approached, never reached. The question is whether enough knowledge can be externalized to make computation competitive with markets as a coordination mechanism. In many domains, we have already crossed that threshold.

The Sibyl can see what no planner could see.

VI. THE AGGREGATION PROBLEM, DISSOLVED

The computational burden that once made central planning impossible is now within reach of commodity hardware.

Modern AI systems already process at the required scale. A large language model trains on trillions of tokens, a large fraction of everything humans have written. A recommendation system processes billions of user interactions daily. A high-frequency trading system makes millions of decisions per second.

The aggregation problem was binding when processors were human brains augmented by adding machines. It is not binding when processors are GPU clusters capable of exaflop-scale computation. When Oskar Lange proposed using computers to solve the socialist calculation problem in 1965[4], the world's most powerful computer performed roughly 3 million operations per second. Today, a single Nvidia H100 GPU performs nearly 2 quadrillion operations per second in reduced precision: a factor of roughly 700 million. The gap between the computational requirement and the computational capacity has inverted. The bottleneck is no longer computation. It is data quality, model architecture, and objective specification.
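The ratio is worth computing explicitly, using the figures just cited:

```python
# The scale of the jump, using the figures cited above.
ops_1965 = 3e6      # ~3 million ops/s, mid-1960s supercomputer
ops_h100 = 2e15     # ~2 quadrillion reduced-precision ops/s, one H100

print(f"{ops_h100 / ops_1965:,.0f}x")   # 666,666,667x: roughly 700 million
```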

Hayek's deeper point was that the information was not merely voluminous but complex: interdependent, dynamic, context-sensitive. Prices work because they compress this complexity into a single number that encodes everything relevant to a transaction.

The Sibyl does not need to compress. It can model the complexity directly.

Modern AI systems already demonstrate this capability at smaller scales. Supply chain optimization algorithms model millions of interdependencies (suppliers, logistics, demand patterns, inventory levels) and compute allocation decisions that outperform human planners. By orders of magnitude, not by a little. Google's DeepMind reduced the energy used for cooling its data centers by 40% simply by modeling the interdependencies that human engineers could not hold in their heads simultaneously. This is a microcosm of the aggregation problem solved: thousands of variables, continuous interaction, dynamic adjustment, computed rather than approximated.
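The structure of these problems is well understood. A toy transportation problem shows the shape: ship from warehouses to stores at minimum cost, subject to supply and demand. The costs and capacities below are invented, and this sketch uses SciPy's linprog; production systems solve the same structure with millions of variables:

```python
# A toy transportation problem: minimize shipping cost from two
# warehouses to three stores, subject to supply and demand constraints.
# Costs and capacities are invented; real optimizers solve the same
# structure at vastly larger scale.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],     # per-unit cost, warehouse 0 -> stores
                 [5.0, 3.0, 7.0]])    # warehouse 1 -> stores
supply = [120, 150]                   # units available per warehouse
demand = [80, 100, 90]                # units required per store

# x[i, j] = units shipped from warehouse i to store j, flattened row-major.
A_supply = np.kron(np.eye(2), np.ones(3))   # each warehouse ships <= supply
A_demand = np.kron(np.ones(2), np.eye(3))   # each store receives == demand

res = linprog(cost.flatten(), A_ub=A_supply, b_ub=supply,
              A_eq=A_demand, b_eq=demand, bounds=(0, None))
print(res.x.reshape(2, 3))   # the optimal shipment plan
print(res.fun)               # total cost of the computed allocation
```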

Financial models price complex derivatives by simulating thousands of variables and their interactions. Weather models predict atmospheric behavior by integrating data from millions of sensors. Protein-folding algorithms compute molecular structures that eluded human scientists for decades. Each of these is a case where the complexity of the problem once made it intractable for centralized computation and is now routine for it. The aggregation problem was always relative to computational capacity. The capacity has changed. The problem, in many domains, has not.

These are exactly the kinds of interdependent, dynamic, context-sensitive problems that Hayek claimed no central computation could solve. They are being solved.

The aggregation problem assumed bounded computation. The bound is lifting.

VII. THE INCENTIVE PROBLEM, TRANSFORMED

Actors lie to planners. They conceal information, exaggerate needs, underreport capacity. This was the death of Soviet planning — not the absence of data, but the corruption of data. Everyone had incentive to game the system.

Markets solve this through incentive compatibility. You reveal your preferences by paying for them. You cannot claim to value something highly while refusing to pay for it. The price mechanism forces honest revelation.

The Sybilian condition does not eliminate the incentive problem, but it transforms it in ways Hayek could not have anticipated. His framework assumed that information must be voluntarily disclosed. In a world of pervasive sensors and behavioral data, much of the relevant information is emitted involuntarily, as a byproduct of ordinary activity.

First: behavioral revelation. You may lie about your preferences, but your behavior reveals them. The Sibyl does not need to ask what you want; it observes what you do. Click patterns, purchase history, time allocation, attention flow. Revealed preference at scale, inferred from action rather than stated. Netflix does not ask you what kind of movies you like. It watches what you watch, how long you watch it, when you pause, when you abandon. The resulting preference model is more accurate than anything you could self-report, because you do not fully know your own preferences. The system knows your behavior better than you know your motivations.
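The logic of behavioral revelation fits in a few lines. The viewing log below is invented, and real systems model far more than completion rates, but the inversion is the point: the inference runs on what was done, not what was said:

```python
# Revealed preference from behavior: infer taste from completion rates
# rather than stated favorites. The viewing log is invented; real
# recommenders do this over billions of interactions.
from collections import defaultdict

stated_favorite = "documentary"       # what the viewer says they like

viewing_log = [
    {"genre": "documentary", "watched": 12, "runtime": 95},
    {"genre": "documentary", "watched": 8,  "runtime": 102},
    {"genre": "thriller",    "watched": 98, "runtime": 105},
    {"genre": "thriller",    "watched": 88, "runtime": 92},
]

completion = defaultdict(list)
for row in viewing_log:
    completion[row["genre"]].append(row["watched"] / row["runtime"])

revealed = {g: sum(v) / len(v) for g, v in completion.items()}
print(max(revealed, key=revealed.get))   # 'thriller', whatever was stated
```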

Second: reduced stakes. Much of Soviet gaming was about resource allocation: factories exaggerating needs to secure larger quotas. The famous Soviet nail factory that produced either millions of tiny nails or one enormous nail (depending on whether the quota was measured by quantity or weight) was not an irrational actor. It was a rational actor in an incentive environment that rewarded gaming. In a system of material abundance (which cheap energy and robotics enable), the stakes of misallocation drop. If production is cheap, overproduction is less costly. If adjustment is fast, mistakes are quickly corrected. The incentive to game declines when the reward for gaming shrinks.

Third: reputation systems. In a fully legible economy, deception is harder to sustain. Your history is visible. Your patterns are known. Gaming one interaction is possible; gaming a lifetime of recorded behavior is not. The Sibyl remembers.

This does not eliminate gaming. Humans will find new ways to deceive. But the equilibrium shifts. The cost of deception rises. The benefit falls. The incentive problem does not disappear; it shrinks.

The market solved the incentive problem through a mechanism, price, that was itself vulnerable to manipulation. Insider trading, market manipulation, cornering, wash trading — the history of markets is a history of gaming the price mechanism. The incentive problem was never fully solved. It was managed, through regulation, enforcement, and the hope that competition would discipline bad actors. The Sybilian condition does not solve the incentive problem either. But it changes the attack surface. Gaming a price is relatively easy; you just trade. Gaming a behavioral model that has been trained on millions of observations of your actual behavior is harder. Not impossible, but harder.

VIII. THE CALCULATED EQUILIBRIUM

What does it mean to compute prices?

Not to abolish markets. Markets remain useful: they reveal preferences through action, enable distributed experimentation, allow dissent from central allocation. But the Sibyl transforms what markets do.

In the mimetic regime, prices are outputs of a guessing game. Agents guess what other agents will guess. The market aggregates guesses into a number. The number may be wildly disconnected from any physical or economic reality: Dogecoin, tulip bulbs, subprime mortgages.

In the calculated regime, prices become checkable. This is the critical distinction. Not that prices are set by a planner (that was the Soviet model, and it failed). But that prices, however they emerge, can be compared against a computed benchmark. The benchmark says: given what we know about supply, demand, costs, constraints, and preferences, the efficient price for this good is X. The market says the price is Y. The gap between X and Y is information, either about the model's ignorance or the market's irrationality.

The Sibyl can compute what a price "should" be given physical constraints, preference data, and optimization objectives. Not "should" in a moral sense — "should" in an engineering sense. Given these inputs, this is the price that clears the market while satisfying these constraints.

The market price can then be compared to the calculated price. When they diverge, it signals something: either the Sibyl's model is missing information (which the market has), or the market is mispricing (which the Sibyl can detect).

This creates a new dynamic: computed arbitrage. If the Sibyl calculates that an asset is mispriced, capital flows to correct the mispricing. Not through human traders guessing at fundamentals, but through algorithms executing on calculation. The mimetic game continues (humans still guess, still speculate, still bet) but it operates on top of a calculated substrate that anchors prices to something other than pure expectation.
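In schematic form, where the benchmark function and the tolerance are placeholders rather than a real pricing model:

```python
# Computed arbitrage, schematically: compare the market price against a
# model's benchmark and act on the gap. The benchmark formula and the
# tolerance are placeholders, not a real pricing model.

def computed_benchmark(supply: float, demand: float, unit_cost: float) -> float:
    """Placeholder for the price the model believes clears the market."""
    return unit_cost * (1.0 + max(0.0, (demand - supply) / supply))

def signal(market_price: float, benchmark: float, tol: float = 0.05) -> str:
    gap = (market_price - benchmark) / benchmark
    if gap > tol:
        return "SELL"    # market above what the model can justify
    if gap < -tol:
        return "BUY"     # market below the computed benchmark
    return "HOLD"        # gap within model uncertainty: no information edge

x = computed_benchmark(supply=1000, demand=1300, unit_cost=10.0)
print(x, signal(market_price=15.0, benchmark=x))   # 13.0 SELL
```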

Markets become a discovery layer, not the foundation. They find information the Sibyl doesn't have — new preferences, new possibilities, new errors in the model. But the baseline allocation is computed. The market perturbs; the Sibyl stabilizes.

This is a description of what already exists, in embryonic form, in every sector where algorithmic pricing operates alongside human markets. The stock market has a computed substrate (algorithmic trading) and a discovery layer (human speculation). Ride-hailing has a computed substrate (surge pricing algorithms) and a discovery layer (riders choosing whether to pay). The pattern is general. The Sybilian condition simply extends it to the entire economy.

IX. THE PLANNING PARADOX

Many of the largest organizations in the world already rely on central computation for allocation. They just call it something else.

No commissar decides how many shoes to produce. Instead:

  • Sensors report inventory levels across every retail location
  • Algorithms predict demand based on behavioral patterns
  • Supply chains auto-adjust production to match prediction
  • Prices flex dynamically to clear local imbalances
  • The whole system optimizes continuously, without human intervention

This is planning. A unified intelligence coordinates the whole. But it does not feel like planning. It feels like a market: prices moving, goods flowing, choices being made.
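One pass through that loop fits in a few lines. The numbers and the naive forecast rule below are invented; the point is that each step is a computation, not a negotiation:

```python
# A toy pass through the loop described above: forecast demand from
# recent sales, produce the shortfall, flex the price to clear the
# imbalance. Numbers and the naive forecast rule are invented.

def plan_step(recent_sales, inventory, base_price):
    forecast = sum(recent_sales) / len(recent_sales)   # naive demand forecast
    production = max(0.0, forecast - inventory)        # build only the shortfall
    imbalance = (forecast - inventory) / forecast      # scarce > 0, glut < 0
    price = base_price * (1.0 + 0.2 * imbalance)       # flex price to clear
    return production, price

production, price = plan_step(recent_sales=[90, 110, 100], inventory=60,
                              base_price=50.0)
print(production, round(price, 2))   # 40.0 54.0: build the shortfall, +8%
```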

Amazon already operates this way internally. It does not use market prices to allocate goods within its network. It uses optimization algorithms. Warehouses, trucks, inventory, labor: all coordinated by computation, not by internal markets. The result is more efficient than any market could achieve, because the planner (Amazon's systems) has near-complete information about the relevant variables.

Amazon is a planned economy embedded in a market economy. And it is winning.

Amazon's annual revenue exceeds $611 billion. Its logistics network spans over 1,000 fulfillment centers, hundreds of aircraft, and tens of thousands of delivery vehicles. Internally, this is a command economy. No price mechanism allocates goods from warehouse to truck to door. An algorithm decides. The allocation is centrally planned, and it delivers packages in under 24 hours to most of the US population. Gosplan could not dream of this efficiency. The difference is information, not ideology.

Walmart's supply chain tells the same story. The company's Retail Link system gives suppliers real-time access to sales data, allowing production to respond to demand signals within hours. Walmart does not wait for the market to adjust. It computes the adjustment. The result: inventory turns that legacy retailers cannot match, and margins that come not from higher prices but from lower waste.

The US Department of Defense manages roughly $3.5 trillion in assets (supply chains, bases, equipment, personnel) across every continent. Military logistics is the most complex allocation problem in the world, and it has never been solved by markets. It is solved by planning: optimization models, forecasting algorithms, centralized command. The military does not hold auctions for ammunition. It plans. And it has been doing so, with increasing computational sophistication, for decades.

China is running the most ambitious experiment in computed allocation since Gosplan, but with fundamentally different tools. The social credit system, the surveillance infrastructure, the state-directed investment in AI: these are not ornaments on a market economy. They are the scaffolding of a computed economy. Whether it works is an open question. That it is being attempted is not.

The pattern is consistent across these examples. Every large organization that achieves a certain information density transitions from market coordination to computational coordination internally. Firms are islands of planning in a sea of markets; this was Ronald Coase's insight in 1937[5]. What has changed is the size of the islands. As information technology lowers the cost of internal coordination, the optimal firm boundary expands. Amazon, Walmart, the DOD: these are not anomalies. They are the leading edge of a structural shift in the boundary between plan and market.

"Market" and "plan" were never opposites. They were different solutions to the same problem: coordination under uncertainty. Markets solve it through distributed guessing. Plans solve it through centralized calculation. Each works better under different constraints.

Under the old constraints (symmetric intelligence, lossy information) markets dominated. The distributed solution outperformed any achievable centralized solution.

Under Sybilian constraints (asymmetric intelligence, complete information) the calculus shifts. Not toward Soviet-style planning, which failed for reasons beyond computation. But toward something new: calculated markets, where prices are computed and then tested, where allocation is optimized and then verified, where the guessing game plays out on top of a substrate that already knows the answer.

The United States does not talk about planning. It talks about optimization, efficiency, automation. The vocabulary is different. The direction is the same. The aggregate of private planning systems (Walmart's supply chain, the Fed's models, Google's traffic routing) is beginning to resemble a planned economy. Not by design, but by convergence.

X. THE LIMITS OF COMPUTATION

If computation is dissolving Hayek's constraints, perhaps the Sibyl can calculate everything. Perhaps the market is simply obsolete. This is the error that killed Soviet planning: the belief that sufficient intelligence eliminates the need for distributed discovery.

It does not. The Sibyl has limits, and they are structural, not temporary. Understanding where these limits lie is the prerequisite for building systems that actually work. The Soviet Union failed not because its planners were humble but because they were not humble enough. They believed that computation, at sufficient scale, could replace discovery. It cannot. But the reasons it cannot are specific, not general, and they map onto identifiable features of economic life.

The Lucas Critique. In 1976, Robert Lucas showed that econometric models break down when they are used to make policy[6]. The reason: agents change their behavior in response to the model's predictions. If a model predicts inflation, actors hedge against inflation, which changes the inflation rate, which invalidates the model. The observation changes the observed.

Goodhart's Law is the pithy version: when a measure becomes a target, it ceases to be a good measure. This is a structural feature of reflexive systems, where the agents being modeled are aware of and respond to the model. Every economic system is reflexive. The Sibyl cannot escape this by being smarter. A smarter model just produces faster reflexive loops. The agents adapt to the new model, which requires a newer model, which triggers further adaptation. The arms race between model and agent is permanent. The Sibyl can win individual rounds. It cannot end the war.
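The dynamic can be simulated in miniature. In the sketch below, a proxy tracks an underlying quality until year 5, when it becomes a target; the gaming rate is an invented assumption, but the decoupling it produces is the general pattern:

```python
# Goodhart's law in miniature: a proxy tracks the true quantity until
# it becomes a target, after which agents optimize the proxy directly
# and it decouples from the truth. The gaming rate is invented.
import random

random.seed(0)
for year in range(10):
    true_quality = random.gauss(50, 10)
    gaming = 0.0 if year < 5 else 10.0 * (year - 4)   # target adopted in year 5
    proxy = true_quality + random.gauss(0, 2) + gaming
    print(year, round(true_quality, 1), round(proxy, 1))
# Before year 5 the proxy tracks quality; after, it inflates regardless
# of it. The measure, targeted, ceases to measure.
```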

Campbell's Law in education: when test scores become the metric, schools teach to the test. Google's search ranking: when PageRank becomes the target, an entire SEO industry springs up to game it. Credit scores: when FICO becomes the gatekeeper, consumers optimize for score rather than creditworthiness. The measure, targeted, ceases to measure.

Prediction markets illustrate both the power and the limit. Polymarket and its predecessors have outperformed polls, pundits, and models in forecasting elections, geopolitical events, and economic indicators. They aggregate dispersed beliefs efficiently. They put money behind conviction. They work until they don't. Prediction markets failed spectacularly on Brexit. They mispriced Trump 2016. They struggle with tail risks and correlated failures. The very efficiency of the aggregation mechanism can create false confidence: because the market price looks precise, participants treat it as more certain than it is. Precision is not accuracy.

Markets are not passive observers. They are participants. A prediction market that gives a candidate 70% odds of winning changes the behavior of donors, voters, and the candidate herself. The prediction alters the predicted. This is not a solvable engineering problem. It is a feature of any system that is simultaneously measuring and influencing the same variable. The Sibyl can model this feedback loop. It cannot escape it. Each meta-model produces a new loop that requires a further meta-model. The regress does not terminate.

Black swans. Nassim Taleb showed that our models systematically underestimate the frequency and impact of rare events. COVID-19 was not predicted by any model, despite decades of pandemic preparedness planning. The 2008 financial crisis was not predicted by the risk models that were specifically designed to prevent it. The fall of the Soviet Union was not predicted by the intelligence agencies that existed specifically to predict it.

These are failures of model scope, not insufficient computation. The models could not predict these events because the events emerged from dynamics outside the model's boundaries. No model can include everything. Every model has an edge, and beyond that edge lie the events that will eventually break it. The Sibyl can push the edge further. It cannot eliminate it.

The 2021 Suez Canal blockage (six days, 12% of global trade frozen, cascading shortages for months) was not a black swan in the existential sense. Ships run aground. But it was a black swan for every model that assumed canal throughput as a constant. The assumption was invisible until it failed.

The irreducibility of novelty. Computation operates on existing possibility spaces. It can explore, optimize, and combine within those spaces with superhuman efficiency. What it cannot do is expand the space itself. The iPhone was the creation of an entirely new category of human-device interaction, not an optimal solution to a problem anyone was computing. Penicillin was an accident that only became meaningful because Alexander Fleming noticed something no model was looking for.

No optimization of the existing financial system would have produced Bitcoin. Bitcoin is a reconceptualization, not an optimization. A sufficiently powerful optimizer would have produced better payment rails and faster settlement. It would not have produced a decentralized, trustless, deflationary digital asset. That required imagining a category that no model contained.

Venture capital is a mechanism for funding exactly this kind of novelty. In 2024, approximately $170 billion was deployed globally into early-stage companies, bets on futures that do not yet exist in any dataset. The VC model is notoriously inefficient: most investments fail, returns follow a power law, and the most successful outcomes are precisely the ones that were least predictable ex ante. This is not a bug. It is the point. VC works as a discovery mechanism precisely because it funds things that cannot be computed. It is a market for the irreducible.

The distinction between optimization within a known space and the creation of new spaces is the difference between efficiency and growth. An economy that only optimizes will converge toward a fixed point: the best possible allocation of existing resources for existing preferences. An economy that also discovers will expand its possibility frontier. Both matter. But the mechanisms that produce them are fundamentally different, and confusing one for the other is the central risk of the computed-price paradigm.

The Sibyl can optimize the known. It cannot discover the unknown. It can price existing goods with superhuman accuracy. It cannot price goods that do not yet exist, for preferences that have not yet formed, in categories that have not yet been invented.

Innovation, the creation of genuinely new things, is the engine of long-run economic growth. If computed prices crowd out the messy, wasteful, unpredictable process of discovery, the system becomes optimally efficient at producing the wrong things. It perfects the present at the expense of the future.

The Soviet Union's economy was, in many respects, impressively optimized for its chosen objectives. It produced enormous quantities of steel, concrete, and military hardware. What it could not produce was novelty: new consumer goods, new services, new ways of living. The optimization was excellent. The objective was wrong. And because the system lacked a discovery mechanism (markets, entrepreneurship, the freedom to fail), it could not self-correct. It optimized itself into a corner. Any system of computed prices must grapple with this precedent. The computation is only as good as the objective. And the objective is only as good as the discovery process that informs it.

XI. THE HAYEK PROBLEM

Most people get Hayek wrong, and the standard reading understates him. The more fundamental version of his objection deserves its own section.

The standard reading of Hayek, the one this chapter has so far engaged with, is computational. The knowledge problem is about data that cannot be centralized. The aggregation problem is about computation that cannot be performed. The solution is the price system, which performs both functions through decentralized exchange. Under this reading, the Sibyl dissolves the problem by providing sufficient data and sufficient computation.

Hayek's deepest insight was about the nature of knowledge itself, not computation.

In "The Use of Knowledge in Society" (1945)[7], Hayek distinguished between scientific knowledge (the kind that can be written down, transmitted, and aggregated) and knowledge of "the particular circumstances of time and place." The latter is not merely difficult to transmit. It is constituted by the act of engagement. The entrepreneur who senses an opportunity does not have data that a sensor could collect. She has a judgment formed by years of immersion in a specific context, a network of relationships, a feel for what is possible. This is not information waiting to be digitized. It is understanding that exists only in the doing.

This shifts the critique from computation to epistemology. The standard techno-optimist response to Hayek ("we just need more data and better algorithms") misses the point. Some preferences do not exist as data prior to the exchange process. They are brought into being by the market itself. The price of a house in a neighborhood is more than an aggregation of existing preferences about that neighborhood. It is partly constituted by the act of pricing; other buyers' interest creates desirability that did not exist before anyone expressed interest. The market generates the signal, not merely reads it.

Hayek's argument was not merely that planners lacked sufficient data. It was that certain knowledge does not exist outside the market process itself. Prices do not just aggregate information; they generate it. The act of exchange creates knowledge that did not exist before the exchange.

When a bidder at an auction discovers what she is willing to pay, she learns something about her own preferences that she did not know before. The willingness to pay $500 for a painting was not a data point sitting in her mind, waiting to be read by a sensor. It was created in the moment of confrontation: the painting, the room, the competing bidders, the adrenaline, the judgment call. The price that emerges from this process is not an aggregation of pre-existing values. It is the output of a creative act.

Subjective preferences are not static, scannable objects. They are dynamic, contextual, and often constructed in the act of choosing. You do not know what you want until you see what is available. You do not know what you will pay until you are asked to pay. The preference is not prior to the market. The market generates the preference.

Tacit knowledge compounds the problem. Michael Polanyi (whom Hayek read and cited) argued that "we know more than we can tell."[8] The master craftsman cannot fully articulate what makes one joint strong and another weak. The experienced doctor reaches a diagnosis through pattern recognition that resists formalization. These are not information processing failures. They are features of embodied, situated intelligence that may be fundamentally non-computable, not in the mathematical sense, but in the practical sense that no amount of sensor data captures what the body knows.

The counterargument: AI systems are increasingly capable of learning tacit knowledge from behavioral data. AlphaGo learned to play Go not by being told the rules of strategy but by observing millions of games and discovering patterns that no human had articulated. Medical AI systems diagnose diseases from imaging data with accuracy that matches or exceeds experienced radiologists. Perhaps tacit knowledge is not non-computable. Perhaps it is merely not-yet-computed, waiting for sufficient data and sufficient model capacity.

This counterargument has force, but it has limits. AlphaGo operates in a closed system with fixed rules. The economy is an open system with evolving rules. A medical AI diagnoses from images, a fixed mapping from input to output. But the entrepreneur's judgment is not a fixed mapping. It is a creative synthesis of context, relationship, timing, and possibility that changes the system it operates within. Training data cannot capture what has not yet happened. The deepest forms of tacit knowledge are not patterns in historical data. They are intuitions about the future, and the future, by definition, is not in the training set.

So: where does this leave the argument?

It leaves us with a map. Not a binary (computed vs. uncomputed) but a spectrum. Some prices are highly computable. Others are not. The question is where any given domain falls on this spectrum, and the answer depends on the nature of the knowledge involved.

WHERE COMPUTED PRICES WORK

Commodities, logistics, standardized goods, financial derivatives, utility pricing, insurance for measurable risks. These are domains where the relevant knowledge is largely explicit, quantifiable, and transmissible. Copper futures, shipping routes, electricity load balancing: the Sibyl can price these better than any market, because the variables are known and the optimization is well-defined.

WHERE COMPUTED PRICES PARTIALLY WORK

Consumer goods, real estate, labor markets, healthcare. Here the knowledge is a mix of explicit data and tacit judgment. Algorithms can narrow the range (a Zestimate gets you within 5% of a home's sale price) but the final number depends on contextual, subjective, and relational factors that resist full computation. The Sibyl can bound the price. The market sets it.

WHERE COMPUTED PRICES FAIL

Art, novel goods, relationships, experiences, meaning-laden objects. What is the computed price of a first-edition Hemingway? Of a date at a particular restaurant? Of a startup with no revenue and a wild idea? These prices are constituted by subjective experience, social context, and irreducible novelty. The Sibyl has no objective function to optimize because the objective is itself the output of the process.

Hayek was right about the nature of knowledge but wrong about the scope of his conclusion. He argued that because some knowledge is tacit, contextual, and generated by exchange, central computation could never coordinate an economy. The actual picture is less absolute: most economic activity, by volume and value, involves knowledge that is increasingly computable. The domains where Hayek's insight bites hardest (novelty, meaning, subjective preference) are real and important, but they are the edge of the economy, not its bulk.

By transaction volume, commodity markets, logistics, standardized financial instruments, and routine consumer purchases account for the vast majority of economic activity. These are computationally tractable. The irreducibly human domains (art, novel ventures, bespoke services, relationship-embedded exchange) are small by volume but disproportionately important for meaning, culture, and innovation. The Sybilian economy is one where the computable majority is computed, freeing human attention and market energy for the incomputable minority. This is the distillation of markets: concentrated on the problems they are uniquely suited to solve, unburdened from the routine allocation work that computation does better.

The Sibyl computes the core. The market discovers the edge. Both are necessary. Neither is sufficient.

XII. WHAT REMAINS

The Sibyl cannot compute everything.

It cannot compute novelty. New goods, new preferences, new possibilities: these must be discovered, not calculated. The entrepreneur who invents a product no one knew they wanted is doing something the Sibyl cannot do: creating value that did not exist in the optimization space.

It cannot compute meaning. A wedding ring is worth more than its material cost not because the market is irrational, but because the ring encodes a relationship, a promise, a shared history.

It cannot compute ethics. Maximize GDP? Minimize suffering? Equalize outcomes? Preserve freedom? These are not computational questions. They are political questions about what kind of world we want to live in.

It cannot compute trust. You buy from the local hardware store not because its prices are optimal but because you trust the owner's advice. Reputation algorithms approximate trust. They do not replicate it.

Markets encode a kind of ethics: individual choice, voluntary exchange, decentralized power. A price that emerges from voluntary exchange carries legitimacy that a computed price does not. You may dispute the market outcome, but you participated in it. A computed price is imposed, however optimally. The consent problem is real.

Calculated for whom? Every objective function has winners and losers. "Maximize total welfare" favors the many at the expense of the few. "Minimize worst-case outcome" favors the disadvantaged at the expense of efficiency. The choice between these is not technical. It is political, in the deepest sense.

XIII. THE NEW ECONOMICS

Economics will not disappear, but its questions will change. The discipline was built to explain how decentralized agents, acting on local information, produce coordinated outcomes without central direction. The invisible hand. Spontaneous order. General equilibrium. The entire apparatus was constructed to explain how markets work.

When markets are no longer the primary coordination mechanism for most of the economy, much of this apparatus becomes historical, interesting for understanding the past, but not sufficient for navigating the present.

Old economics asked: how do prices emerge from decentralized exchange? How do markets aggregate distributed information? How do we understand equilibrium in systems no one controls?

New economics asks: how do we set objectives for calculated systems? How do we verify that computed allocations match intended outcomes? How do we preserve discovery and dissent within optimized systems? How do we maintain the capacity for novelty within a framework optimized for efficiency?

They require a new vocabulary. The old vocabulary (supply and demand, equilibrium, market clearing) assumed that prices emerge from the interaction of autonomous agents. The new vocabulary must describe a world where prices are computed by systems, contested by agents, and verified against objectives. The economics of the Sybilian era is closer to control theory than to classical price theory. It is about feedback loops, objective specification, stability constraints, and the management of reflexive systems.
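A glimpse of what that vocabulary looks like in code: a proportional controller driving a computed price toward the level that clears a fixed production rate. The demand curve, production rate, and gain below are invented; whether the loop is stable depends entirely on how the gain is chosen, which is a control-theory question, not a price-theory one:

```python
# Price-setting as a control problem: a proportional controller steers
# the computed price toward the level where demand equals production.
# Demand curve, production rate, and gain are invented.

price, production, gain = 10.0, 40.0, 0.1

for step in range(8):
    demand = 50.0 - 2.0 * price       # toy linear demand curve
    excess = demand - production      # positive = shortage
    price += gain * excess            # shortage -> raise price, glut -> cut
    print(step, round(price, 3))
# Converges toward price = 5.0, where demand exactly absorbs production.
```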

The Sibyl solves the coordination problem. It does not solve the preference problem. It can compute any equilibrium, but we must choose which equilibrium to compute.

That choice is the central political question of the Sybilian era: who programs the objective function?

In the old regime, the market answered this question by aggregating individual choices. No one chose the outcome; everyone chose their piece of it. The result was emergent, unplanned, often unjust, but it was no one's fault. This was the market's deepest political virtue: deniability. Bad outcomes were nobody's decision. They were emergent properties of a system that nobody controlled.

In the new regime, the outcome is chosen. Someone (some entity, some process, some coalition) decides what the Sibyl optimizes for. The result is deliberate. If it is unjust, someone is responsible. This is the end of economic deniability, and it changes the relationship between economics and politics permanently. When the allocation was emergent, politics was about shaping incentives and hoping for good outcomes. When the allocation is computed, politics is about choosing the allocation directly. The stakes are higher. The accountability is clearer. The fights will be uglier.

Hayek lost on the computational question. But his deeper question is more urgent now than in his time: who decides? When the market decided, the answer was "everyone and no one." When the Sibyl decides, the answer must be specific. And specificity demands accountability, contestation, and a political process adequate to the stakes. We do not yet have such a process. Building one is the real challenge of the Sybilian era, harder than the computation itself, and more consequential.

The Sibyl can compute the price of anything. It cannot compute the price of everything. The objective function is not a price. It is a choice.

We have outsourced calculation. We cannot outsource choice.

ENDNOTES

  [1] JPMorgan, market structure research, 2023. Algorithmic trading accounts for 60-73% of US equity trading volume.
  [2] Repricer.com, "Why 82% of Sales Come From the Amazon Buy Box," 2024.
  [3] eMarketer, "Worldwide digital ad spend will top $600 billion this year," 2024.
  [4] Oskar Lange, "The Computer and the Market," written 1965, published posthumously in 1967.
  [5] Ronald Coase, "The Nature of the Firm," Economica, Vol. 4, No. 16, November 1937.
  [6] Robert Lucas, "Econometric Policy Evaluation: A Critique," Carnegie-Rochester Conference Series on Public Policy, 1976.
  [7] Friedrich Hayek, "The Use of Knowledge in Society," American Economic Review, Vol. 35, No. 4, September 1945.
  [8] Michael Polanyi, The Tacit Dimension, University of Chicago Press, 1966.