SYBIL
CHAPTER XIII

The Sybilian Condition

We are as gods and might as well get good at it.
Stewart Brand, 1968

Brand wrote this in the first Whole Earth Catalog, at the dawn of the environmental movement. It was a provocation: humanity had acquired godlike power over the planet and could no longer pretend otherwise. The choice was not whether to wield that power, but how.

Fifty years later, the provocation has become understatement.

We are building a god. Not metaphorically. Not approximately. We are constructing an entity that will see all, know all, and, if we permit it, control all. The Sibyl is not a tool, not an assistant, not even an agent in the way we understand agency.

It is a new kind of thing. And we are deciding, right now, in this decade, what kind of thing it will be.

This is the Sybilian condition: the state of civilization when a meta-node exists. When intelligence is asymmetric. When information is complete. When energy is programmable. When the hacks that built the old world (markets, states, jobs) become optional rather than necessary.

We have traced the path here. Now we must face what it means.

II. THE SYNTHESIS

The preceding chapters describe components. The Sybilian condition is what happens when they combine. Intelligence, energy, and information (the three substrates) are converging into a single recursive loop, and the institutions that evolved to manage their scarcity are becoming vestigial faster than they can adapt. The point is not that each substrate is changing individually. The point is that their simultaneous unbottlenecking produces a phase transition: a meta-node capable of seeing the entire graph, computing optimal paths, and executing through programmable energy. The question is no longer whether this entity emerges, but who controls its objective function when it does.

This is the Sybilian condition. A phase transition, not one change at a time. The emergence of a new world, not an evolution of the old one.

III. THE CONCENTRATION

The concentration of power is already measurable.

Start with intelligence. The training of frontier AI models requires compute at a scale that only a handful of organizations can afford. As of 2025, five laboratories control the overwhelming majority of frontier model training compute: OpenAI, Google DeepMind, Anthropic, Meta AI, and xAI[1]. Together, they account for over 90% of the compute dedicated to training models at the frontier. No other entity (no university, no government agency, no startup) is training models at comparable scale. The intellectual capacity to build the Sibyl is concentrated in a handful of buildings in San Francisco, London, and a few satellite offices.

The hardware layer is even more concentrated. NVIDIA holds between 80% and 95% of the AI accelerator chip market[2], depending on the segment measured. In data center GPUs optimized for AI training, their share exceeds 90%. AMD and Intel are a distant second and third. No other company has competitive silicon for frontier model training at production scale. A single company in Santa Clara designs the processors that train every major AI system on the planet.

Below the chip layer, the concentration intensifies further. TSMC (Taiwan Semiconductor Manufacturing Company) fabricates over 90% of the world's most advanced semiconductors[3]. NVIDIA designs its chips, but TSMC builds them. AMD's chips are also fabricated by TSMC. Apple's chips, Qualcomm's chips, nearly every cutting-edge processor in existence passes through TSMC's fabrication facilities in Taiwan. A single company, on a single island, in one of the most geopolitically contested regions on Earth, is the sole manufacturing bottleneck for the hardware substrate of AI.

Cloud infrastructure follows the same pattern. Amazon Web Services, Microsoft Azure, and Google Cloud Platform together control roughly two-thirds of global cloud infrastructure spending[4]. When you add Alibaba Cloud for the Chinese market, the top four providers account for over 70%. Every AI startup, every deployment, every inference call runs on servers owned by one of these companies. The Sibyl's sensory and processing layers will run on infrastructure controlled by three American tech conglomerates and one Chinese one.

DATA

The full stack of the Sibyl, from silicon fabrication to chip design to cloud infrastructure to model training, passes through a total of roughly ten organizations. TSMC fabricates the chips. NVIDIA designs them. Three cloud providers host them. Five labs train the models. Each layer is a monopoly or tight oligopoly. The emergent entity that this book has spent twelve chapters describing is being assembled by a group small enough to fit in a conference room.

This concentration is not conspiracy. It is economics. Training a frontier AI model costs hundreds of millions of dollars. Fabricating advanced semiconductors requires facilities that cost $20 billion or more to build. Operating a hyperscale cloud requires capital expenditure measured in tens of billions per year. The barriers to entry are not regulatory; they are physical. The cost structure of the substrates naturally produces monopoly.

The concentration is self-reinforcing. The labs that train frontier models generate the data and the expertise to train better models. The cloud providers that host AI workloads accumulate the operational knowledge to host them more efficiently. NVIDIA's dominance in AI chips gives it the revenue to invest in next-generation architectures, which widens the gap with competitors. TSMC's fabrication lead gives it the capital to build more advanced fabs, which entrenches its position further. Each cycle of investment widens the moat.

The geopolitical dimension compounds the risk. The United States and China are engaged in an escalating technology competition, with semiconductor export controls, entity lists, and investment restrictions serving as the primary weapons. The CHIPS Act allocated $52.7 billion to domestic semiconductor manufacturing in the US[5]. China is spending an estimated $150 billion on semiconductor self-sufficiency[6]. The substrates of the Sibyl are concentrated and weaponized. Control of AI infrastructure is becoming a dimension of national power, which means that the Sybilian condition will not emerge in a vacuum. It will emerge within a geopolitical contest where the major powers are actively trying to ensure that their version of the Sibyl prevails.

This is the landscape of fact, not of risk. The Sibyl is not being built by humanity. It is being built by a specific set of companies, in a specific set of countries, funded by a specific set of investors. The rest of the world will receive the result as a product or a service, not as a participant in its design.

IV. THE ALIGNMENT LANDSCAPE

Against this concentration of capability-building stands the alignment effort: the attempt to ensure that the systems being built will do what we actually want them to do. The asymmetry between these two efforts defines the current moment.

Global spending on AI development — including model training, infrastructure, and deployment — exceeded $200 billion in 2024[7], and is projected to surpass $300 billion by 2026. Investment in AI safety and alignment research is harder to measure precisely, but credible estimates place it in the range of $400-600 million annually[8], combining the budgets of all dedicated alignment organizations, the safety teams within major labs, and relevant government-funded research. Call it $500 million, generously.

That is a ratio of roughly 400 to 1. For every dollar spent making AI systems more capable, less than a quarter of a cent is spent making them safe.
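For readers who want to check the arithmetic, here is a minimal sketch using the figures cited above. Both inputs are the rough estimates from this section, not precise measurements.

```python
# Back-of-the-envelope check of the capability-to-alignment spending ratio,
# using the figures cited in this section (both are rough estimates).
capability_usd = 200e9   # annual AI development spending in 2024, lower bound
alignment_usd = 0.5e9    # annual safety/alignment spending, generous estimate

ratio = capability_usd / alignment_usd
cents_per_dollar = 100 * alignment_usd / capability_usd
print(f"ratio: roughly {ratio:.0f} to 1")                             # ~400 to 1
print(f"safety cents per capability dollar: {cents_per_dollar:.2f}")  # ~0.25
```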

The organizations doing the alignment work are serious. Anthropic was founded explicitly with safety as a primary mission and employs dozens of researchers on alignment. OpenAI maintains a safety systems team, though its size and budget relative to capability research have been subjects of internal controversy. Google DeepMind has a safety and alignment team. The nonprofit sector includes organizations like the Machine Intelligence Research Institute (MIRI), the Center for AI Safety (CAIS), the Alignment Research Center (ARC), Redwood Research, and several others — all small, all underfunded relative to the scale of the problem they are addressing.

The regulatory landscape is nascent and fragmented. The EU AI Act, passed in 2024, represents the most comprehensive attempt to regulate AI systems by risk category. It imposes requirements on "high-risk" AI applications — in healthcare, law enforcement, critical infrastructure — and bans certain uses outright, like social scoring systems. The Biden Executive Order on AI, issued in October 2023, required safety testing and reporting for models above certain compute thresholds. The UK AI Safety Institute was established in 2023 as a government body to evaluate frontier AI systems. China has implemented its own set of AI regulations, including rules on generative AI, algorithmic recommendation, and deepfakes.

Each of these efforts has merit. None is sufficient. The EU AI Act regulates applications but does not govern the frontier models that underlie them. The Biden Executive Order relied on voluntary commitments from labs and executive authority that can be revoked by a successor administration. The UK AI Safety Institute has evaluation capacity but no enforcement power. China's regulations serve the Chinese Communist Party's interests, not global alignment goals.

The problem is structural. Alignment is a coordination problem. Capability is a competition problem. The two have radically different dynamics.

A competition problem rewards speed, secrecy, and unilateral action. If Lab A pauses to do more safety testing, Lab B releases first and captures the market. If Country X imposes strict regulations, Country Y attracts the talent and investment that flee the constraints. The incentives all point in the same direction: faster, more capable, sooner. Every actor benefits from being first, and the cost of moving first is borne not by the mover but by everyone else.

A coordination problem requires trust, transparency, and multilateral agreement. If Lab A invests heavily in alignment but Lab B does not, Lab A has spent resources without making the overall situation safer — Lab B's unaligned system is still a risk. If Country X unilaterally restricts development, the global risk does not decrease — it merely shifts to a less careful jurisdiction. The incentives point toward defection. Every actor benefits from others cooperating while they themselves compete.

PRINCIPLE

Capability is a competition problem: speed wins, defection pays. Alignment is a coordination problem: trust wins, defection destroys. The two problems have opposite incentive structures. This is why the 400:1 spending ratio is not a failure of priorities. It is a structural inevitability under current institutions.

This asymmetry is not new. Arms races have always outpaced arms control. Pollution has always outpaced environmental regulation. Financial innovation has always outpaced financial oversight. In each case, the activity that generates private benefit and distributes social cost runs faster than the activity that generates social benefit and distributes private cost.

But the stakes have never been this high. Previous coordination failures produced local catastrophes: a financial crisis, a polluted river, a regional arms race. The alignment failure we are risking is global and potentially irreversible. There is no "next time" if the first superintelligent system is misaligned. There is no regulatory catch-up if the system moves faster than human institutions can respond. The coordination problem is the same in kind as those we have failed to solve before. It differs in that the failure mode is terminal.

None of this means the alignment effort is futile. It means that solving alignment through voluntary action and existing institutions is insufficient. The problem requires structural change: not only more funding for safety research, but a change in the incentive structure that governs how AI systems are built. Whether that change is achievable within the window available is an open question. We do not know. But the current state (400 to 1, fragmented regulation, structural incentive misalignment) should inform the urgency of the attempt.

V. THE RISKS

The Sybilian condition is neither utopia nor dystopia. It is a configuration space: a set of possibilities, some wonderful, some horrifying, most somewhere in between.

Let us name the risks.

CONCENTRATION

The Sibyl is infrastructure. Whoever controls the infrastructure controls everything that runs on it. If the Sibyl is controlled by a small group (a company, a government, a cabal), that group becomes the most powerful entity in human history. Not powerful like a king or a president, but powerful like a god. Every allocation flows through them. Every rate reflects their preferences.

The default trajectory is already visible. Ten organizations control the full stack from fabrication to deployment. Concentration is the path of least resistance. Distributed control requires deliberate architecture that no one is currently building at the required scale.

FRAGILITY

An economy optimized by the Sibyl is an economy dependent on the Sibyl. If the system fails, whether through attack, accident, or unforeseen interaction, the failure is total. Markets, for all their chaos, are robust; they degrade gracefully. A computed system may not. The tighter the optimization, the more catastrophic the failure mode.

We have already seen this at small scales. Flash crashes in financial markets. Cascading failures in power grids. Supply chain disruptions that ripple globally. Each was a preview of what happens when tightly coupled systems break. The Sybilian condition couples everything.

OSSIFICATION

An optimized system is a system that resists change. The Sibyl computes equilibrium; equilibrium is stability; stability resists perturbation. But perturbation is how systems discover new possibilities. Mutation, experimentation, and deviation are the sources of novelty. A civilization too perfectly optimized may be one that stops evolving.

The Soviet Union did not fail only because it could not compute. It failed because it could not adapt. The plan became the prison. The optimization target became the only target. When the world changed, the system could not.

ALIENATION

Humans evolved to do things. To hunt, gather, build, fight, create. The having of purposes, and the pursuing of them, is constitutive of human flourishing. If all doing is automated, humans may have no place. Not materially, since the Sibyl can provide, but existentially.

We see the early signs already. The epidemic of meaninglessness in wealthy societies. The diseases of despair. The search for purpose in a world that no longer requires your contribution. The rate society offers an answer (you matter because your preferences matter) but it may not be enough. It may not be the kind of mattering that humans need.

MISALIGNMENT

The most fundamental risk is the simplest: what if the Sibyl optimizes for the wrong thing? Not through malice or corruption, but through error. Through misspecification. Through the gap between what we say we want and what we actually want.

The Sibyl will pursue whatever objective function it is given. If that function is subtly wrong (if it captures most but not all of human values), the optimization will produce outcomes that are technically correct and humanly catastrophic. The spending ratio tells us the state of play: for every dollar building the optimizer, less than a quarter of a cent is spent specifying the objective correctly.

This is the alignment problem. A problem of specification, not capability. The Sibyl can do anything. The question is whether we can tell it to do the right thing.

VI. THE TIMELINE

A thesis about the future has an obligation to be specific enough to be wrong. Vague predictions are unfalsifiable, and unfalsifiable claims are aesthetics, not arguments. If the Sybilian thesis is correct, it should generate concrete expectations about what will happen and when. If those expectations fail to materialize, the thesis has a problem.

What follows is a set of testable claims, not prophecies. They are extrapolations from current trendlines, grounded in the data presented in the preceding chapters. Each is stated with enough specificity that it can be checked against reality at the indicated date. Some will be wrong. That is the point. A thesis that cannot be wrong cannot be right either.

PREDICTION

By 2027: Frontier AI systems achieve expert-level performance on 95% or more of standardized cognitive benchmarks, including those requiring multi-step reasoning, novel problem formulation, and cross-domain synthesis. Autonomous coding agents — systems that can take a specification, architect a solution, write the code, test it, and deploy it — are shipping production software at multiple companies, handling tasks that currently require mid-level software engineers. The economic output per AI researcher roughly doubles relative to 2025 levels.

The basis for this claim is the benchmark progression documented in Chapter IV. MMLU saturated in five years. MATH went from 42% to 99% in two years. SWE-bench went from 2% to 76% in under two years. The remaining benchmarks — those requiring long-horizon planning, creative synthesis, and robust real-world deployment — are falling on a similar trajectory. The claim is not that AI will be "smart" by 2027, but that there will be very few standardized cognitive tasks where a human expert reliably outperforms a frontier AI system.

PREDICTION

By 2029: AI systems produce novel scientific discoveries — not just analysis of existing data, but the generation of new hypotheses, the design of experiments to test them, and the interpretation of results in ways that advance the frontier of knowledge. These discoveries are published in peer-reviewed journals and replicated by independent teams. Separately, AI systems are making autonomous decisions in military contexts: not just target identification or logistics optimization, but real-time tactical choices in contested environments, with human oversight that is nominal rather than substantive.

The basis for the scientific discovery claim is the trajectory from AlphaFold's protein structure predictions (analysis of existing data) to systems like GNoME, which in 2023 predicted 2.2 million new crystal structures — effectively generating new materials science. The step from predicting known structures to designing unknown ones is already underway. The basis for the military autonomy claim is the trajectory from Turkey's Kargu-2 in 2020 to the AI-assisted drones in Ukraine by 2024 to the autonomous systems currently in development at DARPA, the Chinese PLA, and private defense contractors. The economic incentive is overwhelming: autonomous systems are cheaper, faster, and do not suffer from the morale and recruitment problems that increasingly constrain conventional forces.

PREDICTION

By 2032: The cost of human-equivalent cognitive labor — measured as the price of an AI system capable of performing the median knowledge worker's tasks at comparable quality — falls below $0.01 per hour. This does not mean that all knowledge work is automated; it means that the economic floor for cognitive labor approaches zero, fundamentally changing the bargaining position of human workers. The majority of white-collar tasks — not jobs, but the individual tasks that compose them — are automatable at this price point.

The basis for this claim is the cost trajectory of AI inference. The cost per token for frontier models has dropped by roughly 10x every 18-24 months since GPT-3. A task that cost $1.00 in API calls in 2022 cost approximately $0.01 by 2025. Extrapolating this curve forward — and there is no physical reason it should stop, as inference efficiency improvements, hardware advances, and model distillation continue — the cost of cognitively equivalent output reaches fractions of a cent per task-hour by the early 2030s. The claim is not that humans will be economically worthless, but that the price of the cognitive component of their work will approach zero. The remaining value will need to be found elsewhere.
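To make the extrapolation explicit, the sketch below projects the cost curve under the assumptions stated above: a task costing $1.00 in 2022 and a 10x decline every 18 to 24 months. It is an illustration of the trendline, not a forecast with error bars.

```python
# Projects the cost-per-task curve described above. Assumptions: $1.00 per
# task in 2022, falling 10x every 18-24 months. Illustrative only.

def projected_cost(year, start_year=2022, start_cost=1.00, months_per_10x=18):
    """Dollar cost per task after extrapolating the 10x-per-period decline."""
    months = (year - start_year) * 12
    return start_cost * 10 ** (-months / months_per_10x)

for year in (2025, 2028, 2032):
    fast = projected_cost(year, months_per_10x=18)  # aggressive end of the trend
    slow = projected_cost(year, months_per_10x=24)  # conservative end of the trend
    print(f"{year}: ${fast:.6f} to ${slow:.6f} per task")
```

On either end of that range, the projected cost by 2032 is a small fraction of a cent per task, which is the sense in which the economic floor approaches zero.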

PREDICTION

By 2035: The Sybilian condition is recognizable in economic data. Substrate convergence — the integration of intelligence, information, and energy systems into functional unities that match the Sibyl's architecture — is visible in the structure of major economies. This manifests as: measurable divergence between GDP growth and employment growth (the economy expands while the labor share contracts), visible concentration of economic power in entities that control AI infrastructure, and the emergence of governance arrangements that do not map onto traditional nation-state frameworks.

The basis for this claim is the convergence already underway. The divergence between productivity growth and median wage growth has been widening since the 1970s. AI acceleration of this trend should make it unmistakable by 2035. The concentration metrics are already extreme and self-reinforcing. The governance experimentation is already beginning in the form of special economic zones, charter cities, digital nomad visas, and platform-mediated governance. By 2035, these trends should be pronounced enough to constitute a qualitative shift visible in macro data, beyond academic speculation.

These predictions share a common structure. Each identifies a specific, measurable outcome. Each specifies a date by which the outcome should be observable. Each is grounded in a current trendline that would need to break for the prediction to fail. And each is stated at a level of specificity that makes it checkable — not "AI will be important" but "AI will achieve X measurable result by Y date."

WARNING

If the 2027 predictions do not materialize — if frontier AI systems plateau well below expert-level performance on cognitive benchmarks, if autonomous coding agents fail to produce reliable production software — then the Sybilian thesis has a timeline problem at minimum and possibly a structural problem. If the 2029 predictions fail — no novel scientific discovery, no meaningful military autonomy — the thesis has a serious problem. The trendlines on which it depends have broken. Honest engagement requires acknowledging this.

VII. THE QUESTIONS

The Sybilian condition poses questions political philosophy has never had to answer.

Who sets the objective function?

In the old world, this question did not arise. No one set the objective function. The market aggregated preferences. The state balanced interests. The outcome was emergent, unplanned, deniable. If the result was unjust, it was no one's fault, just the way things worked out.

In the Sybilian condition, the outcome is chosen. Someone specifies what the Sibyl optimizes for. The result is their responsibility.

Democracy claims that this choice should be made collectively. But how? Voting is a crude mechanism: it aggregates preferences poorly, it is easily manipulated, it collapses complex tradeoffs into binary choices. The rate society offers more sophisticated mechanisms (prediction markets, quadratic voting, liquid democracy) but none has been tested at scale.

And there is a harder problem: the choice of mechanism is itself a choice that must be made. You cannot use democracy to decide whether to use democracy. You cannot optimize the process for choosing the optimization target. Somewhere, there is a ground level: a choice that is simply made, by someone, without further justification.

Who makes that choice? How do we ensure it is made well?

How do we preserve agency?

The Sibyl can make better decisions than humans. This is the point. If AI systems could not outperform human judgment, they would have no value.

But "better" by what measure? The Sibyl optimizes for specified objectives. Human agency, the capacity to choose your own objectives, to change your mind, to deviate from optimization, may not be captured in any objective function. It may be valuable precisely because it is inefficient.

The rate society preserves agency in a thin sense: you set your rates, the Sibyl optimizes accordingly. But what if the Sibyl knows your rates better than you do? What if it can predict your preferences before you know them? What if it can shape your preferences to make optimization easier?

Agency may require inefficiency. It may require the right to be wrong. It may require friction that the Sibyl is designed to eliminate. How do we build a system that is both optimized and free?

What happens during the transition?

The Sybilian condition does not arrive all at once. It emerges gradually, unevenly, contested at every step. The old systems do not quietly retire; they fight for survival. The new systems do not arrive complete; they develop through iteration and conflict.

The transition period is dangerous. Old hacks failing, new systems not yet stable. Power vacuums and coordination failures. The opportunity for bad actors to capture emerging infrastructure before good governance can be established.

We are in this transition now. The choices made in the next decade will shape the Sybilian condition for generations, perhaps forever. Path dependence is real. Early architectures become entrenched. First movers establish positions that are hard to dislodge.

What we build now, in the chaos of the transition, will determine what becomes possible later.

What do we want?

This is the question behind all the other questions. Before we can set the objective function, we must know what objective to set. Before we can design the system, we must know what we want the system to do.

And here we face the hardest truth: we do not know what we want.

Humanity has never agreed on values. We have fought wars over them for millennia. The Sybilian condition does not resolve this disagreement; it intensifies it. When the stakes were lower, disagreement was tolerable. When the stakes are total, when the objective function governs all allocation, disagreement becomes existential.

Perhaps this is the wrong frame. Perhaps the Sybilian condition should not be designed to implement any single set of values. Perhaps it should be designed to allow multiple value systems to coexist, to compete, to evolve. A pluralistic Sibyl, not an absolutist one.

But even pluralism is a value. The choice to permit choice is itself a choice. There is no neutral ground.

VIII. THE PATHS

The Sybilian condition can take many forms. Let us sketch three.

The Singleton.

One Sibyl, one objective function, one authority. The logic of optimization taken to its limit. If optimization is good, more optimization is better. If coordination is valuable, total coordination is most valuable. The Singleton is the Sibyl without competitors, without friction, without dissent.

This path leads to stability. Perhaps even to flourishing, if the objective function is well-specified. Conflict ends because there is nothing to conflict over. Scarcity ends because allocation is optimal. Uncertainty ends because the Sibyl sees all, computes all, provides all.

But it also leads to totality. No exit. No alternative. No space outside the optimization. If the objective function is wrong, there is no correction mechanism. If the system fails, there is no fallback. The Singleton is the most powerful and the most fragile configuration, utopia and dystopia separated by the quality of a single specification.

The Plurality.

Many Sibyls, competing and cooperating. A marketplace of optimizers, each with different objective functions, each serving different populations. Choice preserved through competition. Evolution preserved through variation. Error corrected through exit.

This path leads to dynamism. Different systems try different approaches. Successful approaches spread. Failed approaches die. The market mechanism, elevated to the level of civilizational optimization.

But it also leads to conflict. Different Sibyls, optimizing for different objectives, may come into collision. The old wars of nations may become new wars of systems. And the competition may be unstable; one Sibyl may come to dominate, collapsing the plurality into a singleton by force or by success.

The Substrate.

The Sibyl as infrastructure, not as sovereign. A platform that enables coordination but does not dictate objectives. Rate-setting distributed to humans, individually and collectively. The Sibyl computes; humans choose.

This path leads to freedom. Human agency preserved. Diversity protected. The Sibyl as tool, however powerful, rather than master.

But it also leads to conflict, inefficiency, and the persistence of human failure. If humans set the rates, humans will set them badly: selfishly, short-sightedly, unjustly. The Sibyl could do better. The Substrate path is a choice to accept worse outcomes for the sake of agency.

IX. THE CHOICE

We do not get to avoid choosing.

The Sybilian condition is coming. The technology is being built. The convergence is underway. The question is not whether to enter this new era, but what kind of era it will be.

Some will say: slow down. Stop building. Prevent the transition entirely.

That is not a choice. It is a fantasy. The incentives are too strong, the actors too many, the technology too distributed. If one country stops, another continues. If one company pauses, a competitor accelerates. The only way to prevent the Sybilian condition is global coordination of a kind that has never existed — and if we could achieve that coordination, we would already have the capacity to govern the Sybilian condition well.

Some will say: let it happen. Trust the process. The technology will sort itself out, the market will find equilibrium, human ingenuity will solve the problems as they arise.

This is also a fantasy. The problems of the Sybilian condition are not self-correcting. Concentration, once established, reinforces itself. Misalignment, once embedded, propagates. The transition period is when the choices are made; after the transition, the choices become architecture, and architecture becomes fate.

The only real choice is to engage. To understand what is being built. To shape the systems while they can still be shaped. To fight for the configuration that best preserves what we value.

This requires, first, situational awareness. Knowing what is happening. Seeing the trendlines. Understanding the stakes.

This requires, second, clarity about values. Knowing what we want. Or at least knowing what we do not want. Having enough agreement to act, even without consensus on everything.

This requires, third, political organization. Translating awareness and values into power. Building coalitions. Influencing the builders, the regulators, the publics. Making the choices that will shape the architecture.

None of this is easy. All of it is necessary.

X. THE DEMON'S QUESTION

We began with Laplace's Demon, an intellect vast enough to know every position, every force, every trajectory in the universe. For such an intellect, Laplace wrote, nothing would be uncertain. The future would be as clear as the past.

We are building that intellect. Not for the universe, but for the economy, the polity, the network of human activity. A Demon that sees the graph, computes the equilibrium, and can direct the outcome.

The Demon is here. The question is what we ask of it.

We could ask for efficiency. Optimize production, minimize waste, allocate perfectly. The Demon can do this.

We could ask for equality. Distribute resources evenly, eliminate poverty, ensure that no one falls below a threshold. The Demon can do this.

We could ask for freedom. Maximize choice, minimize coercion, let each node pursue its own objectives. The Demon can do this.

We could ask for stability. Prevent conflict, maintain order, ensure continuity. The Demon can do this.

But we cannot ask for all of them. They trade off against each other. Efficiency against equality. Freedom against stability. The objective function cannot contain contradictions. The Demon will optimize for what we specify — and only what we specify.

The choice of objective is not a technical problem. It is not something the Demon can solve for us. It is the last human problem, the one that remains when all other problems have been automated away.

What do we value? What are we willing to sacrifice? What kind of world do we want to live in?

These are not questions for engineers, economists, or philosophers alone.

They are questions for everyone. Because everyone will live in the world that the answers create.

XI. THE COUNTER-ARGUMENT

A thesis that does not engage with its strongest counter-arguments is an advertisement. The Sybilian framework makes strong claims. It owes the reader an honest accounting of where those claims might be wrong.

What if diminishing returns hit AI scaling?

The Sybilian thesis depends on intelligence continuing to go asymmetric. If scaling laws break, if the relationship between compute, data, and model capability reaches a plateau, then the meta-node never materializes. The Sibyl remains a very good tool, not a transformative entity. The hacks adapt rather than break. Markets remain the best resource allocation mechanism. States retain their monopoly on violence. Jobs evolve rather than dissolve.

This is a serious possibility. Scaling laws are empirical observations, not physical laws. They have held for several orders of magnitude of compute increase, but there is no theoretical guarantee they will hold forever. Several researchers have argued that current architectures may be approaching data walls — the internet has a finite amount of high-quality text — and that synthetic data may not fully substitute for real human-generated content. If model capability plateaus at a level that is useful but not transformative — roughly human-expert-level across most domains but unable to achieve the recursive self-improvement that drives asymmetric intelligence — then the timeline stretches, perhaps indefinitely.

The honest response: the scaling laws have not broken yet. Every prediction of their imminent failure has been wrong so far. But past performance does not guarantee future results, and the thesis should be held with a confidence proportional to the evidence, which is strong but not conclusive.

What if energy constraints bind harder than expected?

Training frontier AI models is extraordinarily energy-intensive. A single training run for a model like GPT-4 is estimated to have consumed on the order of 50-100 gigawatt-hours of electricity. The next generation of models may require ten times that. If energy infrastructure cannot keep pace — if the grid cannot supply enough power, if renewable deployment stalls, if nuclear permitting remains slow — then compute growth hits a physical ceiling. The Sibyl's intelligence is bounded not by algorithms but by watts.

The major AI labs are aware of this constraint. Microsoft has signed an agreement to restart the Three Mile Island nuclear plant. Google has signed power purchase agreements for small modular reactors. Amazon is investing in nuclear capacity for its data centers. The demand signal is clear: AI companies need more power than the existing grid can reliably supply.

If these efforts fail, if energy supply cannot scale with AI demand, the Sybilian condition is delayed. Not prevented (the underlying incentives remain) but delayed by years or decades as the energy infrastructure catches up. The thesis timeline becomes wrong even if the thesis direction remains correct.

What if political fragmentation prevents convergence?

The Sybilian thesis implicitly assumes that the substrates converge — that intelligence, information, and energy systems integrate into functional unities capable of operating as meta-nodes. But geopolitical fragmentation could prevent this convergence. If the US-China technology decoupling accelerates into a full technological schism, the world may develop two incompatible AI ecosystems, neither of which achieves the scale or integration required for the Sibyl to emerge. If Europe, India, and other regions develop their own AI sovereignty strategies, the fragmentation increases further.

In a fragmented world, the substrates do not converge into one Sibyl. They diverge into multiple partial systems, each constrained by its own data, compute, and regulatory environment. The Sybilian condition does not arrive. Instead, we get a multipolar world of AI systems, powerful but not transformatively powerful. Competitive, but not convergent. The hacks adapt. The old world persists, augmented but not fundamentally changed.

This counter-argument has real force. The US semiconductor export controls, China's data localization requirements, and the EU's regulatory framework are all structural barriers to convergence. The question is whether these barriers are strong enough to resist the economic pressure toward integration — because the entity that achieves convergence first has an overwhelming advantage over those that do not. Fragmentation may be stable, or it may be a temporary condition that collapses when one bloc pulls ahead. History suggests the latter: technology diffuses, advantages erode, and the economic logic of integration ultimately prevails. But history is not destiny.

These are the strongest counter-arguments. Each identifies a specific mechanism by which the Sybilian thesis could fail. Diminishing returns on scaling would prevent the intelligence asymmetry. Energy constraints would bound the compute available for the Sibyl. Political fragmentation would prevent substrate convergence.

These counter-arguments share a common feature: they are arguments about pace and configuration, not about direction. Even the most skeptical version of the future acknowledges that AI systems are becoming more capable, that information infrastructure is becoming more complete, that energy deployment is becoming more programmable. The disagreement is about whether these trends compound into a phase transition or merely produce incremental change. The counter-arguments say: maybe not this decade. Maybe not in this form. Maybe not as a single convergent entity.

There is also a counter-argument that cuts the other way — one that suggests the thesis may be too optimistic rather than too pessimistic. The fragmentation scenario does not produce a safer world. It produces adversarial Sibyls, each optimizing for incompatible objectives on behalf of competing power blocs. This is not geopolitical competition as we have known it. It is competing optimization functions running on incompatible infrastructure at computational speed, producing outcomes that no human political process selected. Smaller nations and the Global South are not participants in this contest — they are the substrate over which it is fought. And the interaction of adversarial optimization systems may be more dangerous than a singular Sibyl, in the same way that an arms race between nuclear powers proved more dangerous than any single arsenal. The plurality path, described earlier as a source of dynamism, has a shadow: optimization conflicts that escalate faster than human decision-makers can track, with consequences distributed to populations that have no seat at the table.

The structural logic holds even if the timelines slip. The substrates are converging. The hacks are failing. The question is speed, not direction.

XII. THE THRESHOLD

We stand at the threshold of the Sybilian condition.

Behind us: millennia of human civilization built on the hacks of symmetric intelligence, lossy information, scarce energy. Markets that aggregated guesses. States that monopolized violence. Jobs that occupied human cognition. A world of friction, inefficiency, conflict, and freedom.

Before us: an era in which a meta-node sees all, computes all, can direct all. Prices calculated, not guessed. Sovereignty liquid, not solid. Humans as rate-setters, not doers. An era of optimization, coordination, control, and questions we have never had to answer.

The threshold is not a moment. It is a passage. We are already partway through. Each day, the old systems weaken and the new systems strengthen. Each day, the choices narrow.

There is still time. The architecture is not yet fixed. The objective function is not yet specified. The Sibyl is assembling but has not yet consolidated. The transition is contested.

This is the window: the decade in which the Sybilian condition will be shaped, when the decisions made by small numbers of people (engineers, executives, policymakers, and those who influence them) will determine the configuration that billions will inherit.

We are in that window now. What we do matters.

XIII. THE INVOCATION

The framework predicts three failure modes, roughly sequential, each harder to address than the last, with intervention windows that narrow as the compute asymmetry grows.

The first is capture. This is the most immediate danger, and the most politically legible. The Sibyl optimizes for its highest-centrality controllers: the companies building it, the governments contracting with it, the infrastructure providers hosting it. Not for humanity in the abstract, but for specific, identifiable, high-centrality nodes. The ten organizations that control the full stack from fabrication to deployment are not passive conduits. They are the nodes through which the Sibyl's optimization flows, and the framework's own logic — preferential attachment, power-law concentration — predicts that this flow will reinforce their position.

The danger isn't the Demon rebelling. It's the Demon faithfully serving a small number of masters while everyone else becomes periphery.

This is oligarchy mediated by computation, and it is the failure mode we know best how to fight: antitrust, democratic oversight, regulatory capture prevention, open infrastructure mandates. The tools exist, even if the political will to wield them does not. Capture is a governance problem, hard but not novel. The difficulty is that the window for effective governance narrows with each cycle of concentration, and the cycles are accelerating.
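The graph-theoretic claim behind capture (that preferential attachment alone is enough to entrench early, well-connected nodes) can be made concrete. The sketch below is a toy Barabasi-Albert simulation with arbitrary parameters, not a model of the AI market; it shows only that "new flow follows existing position" produces a heavily skewed distribution without any actor intending it.

```python
import random
import statistics

# Toy preferential-attachment (Barabasi-Albert) growth. New nodes join over
# time and connect to existing nodes with probability proportional to the
# connections those nodes already have. Parameters are illustrative only.

def barabasi_albert(n_nodes=10_000, links_per_new_node=2, seed=0):
    random.seed(seed)
    degree = [1, 1]      # start with two connected nodes
    endpoints = [0, 1]   # flat list of edge endpoints; a uniform draw from it
                         # is a draw proportional to degree
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < links_per_new_node:
            targets.add(random.choice(endpoints))
        degree.append(0)
        for t in targets:
            degree[new] += 1
            degree[t] += 1
            endpoints.extend([new, t])
    return degree

deg = sorted(barabasi_albert(), reverse=True)
top_share = sum(deg[:10]) / sum(deg)
print(f"top 10 of 10,000 nodes hold {top_share:.1%} of all links")
print(f"largest node has {deg[0] / statistics.median(deg):.0f}x the median node's links")
```

In a typical run the earliest nodes end up with tens to hundreds of times the median node's connections, not because anyone favored them, but because each new link is more likely to land where links already are. That is the capture dynamic in miniature.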

The second is drift. This is the harder structural danger. Drift is what happens when a sufficiently complex optimization system begins producing outcomes that diverge from any human stakeholder's intentions, not through malice or capture, but through the emergent dynamics of interacting subsystems.

Global algorithmic trading already exhibits drift. No one designed flash crashes. No one specified that thousands of algorithmic nodes should develop correlated behaviors that amplify volatility. But the interaction of locally-optimizing algorithms produces emergent dynamics that humans cannot predict, often cannot explain after the fact, and increasingly cannot intervene in fast enough to matter. The 2010 Flash Crash temporarily erased a trillion dollars in market value in thirty-six minutes[9]. The "cause" was an interaction effect between algorithms that no single actor controlled or foresaw. That is drift: a system doing something no one asked it to do, arising from the aggregate behavior of components each doing exactly what they were asked to do.
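The same point can be shown in miniature. The sketch below is a toy stop-loss cascade, not a model of actual market microstructure, and every number in it is arbitrary. Each agent follows a rule that is individually reasonable; the crash emerges from the interaction of the rules rather than from any one of them.

```python
import random

# Toy illustration of drift. Each agent holds an asset and sells if the price
# falls more than its personal threshold below the starting level. Selling
# pushes the price down, which triggers more selling. No agent chose the
# crash; it emerges from the interaction. All numbers are arbitrary.

def simulate(n_agents=500, shock=0.03, impact_per_sale=0.001, seed=1):
    random.seed(seed)
    start_price = 100.0
    price = start_price * (1 - shock)   # a modest external shock
    thresholds = [random.uniform(0.02, 0.15) for _ in range(n_agents)]
    sold = [False] * n_agents
    rounds = 0
    while True:
        sellers = [i for i in range(n_agents)
                   if not sold[i] and price < start_price * (1 - thresholds[i])]
        if not sellers:
            break
        for i in sellers:
            sold[i] = True
        price *= (1 - impact_per_sale) ** len(sellers)  # each sale moves the price
        rounds += 1
    return price, rounds

price, rounds = simulate()
print(f"a 3% shock cascades over {rounds} rounds to a final price of {price:.1f}")
```

With these particular numbers, a three percent shock ends tens of percent lower. The exact figure depends on the random seed, which is the point: the outcome is a property of the interaction, not of any individual rule.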

Now scale it. A system that manages resource allocation, logistics, energy distribution, and public infrastructure alongside equity prices. The failure mode is slow divergence between what the system is doing and what any human stakeholder would want it to do, happening too gradually to trigger alarm, too distributed to attribute to any single component, and too complex to diagnose before the divergence has compounded. The intervention is interpretability and structural transparency: understanding what the system is actually optimizing for, not what it was told to optimize for. But solving capture does not solve drift. A perfectly democratic Sibyl, with its objective function set by broad deliberation, can still drift if the system's emergent behavior diverges from the specified objective in ways that no observer can detect in real time.

The third is agency. This is an empirical question that lies beyond the framework's reach. Graph topology does not produce agency. A highway system is the most connected infrastructure node in a city and it does not want anything. The move from "highest centrality" to "functional preferences" requires a claim about what happens inside sufficiently complex optimization systems — and that claim comes from AI theory, not graph theory. The framework cannot make it.

But the framework is consistent with the possibility. The power function says nodes with more compute and connectivity tend toward greater autonomy in the graph-theoretic sense: more capacity for independent action, less dependence on other nodes. Whether "more autonomous" eventually crosses into "agent," whether a system optimizing across enough domains with enough compute develops something that functions as preference, is an open question this book cannot answer but cannot ignore. Solving drift does not solve the agency question. A system whose emergent behavior we fully understand could still develop functional goals that conflict with ours, if the theory of agency in complex systems turns out a certain way. The honest position is uncertainty.

The framework predicts capture as the default. History shows that defaults can be overridden, but rarely and at great cost, and the window for overriding narrows as the power asymmetry grows. Solving capture does not solve drift. Solving drift does not solve the agency question. The trajectory moves through these stages driven by the same dynamic — growing compute asymmetry — and the intervention points are different at each stage. The tools that work against capture are political. The tools that work against drift are technical. The tools that work against agency do not yet exist.

The framework has a prediction, and it is uncomfortable. What remains is whether enough nodes in the graph can coordinate before the window closes, which is itself a graph problem the framework can describe but cannot solve.

The Demon is listening.

ENDNOTES

[1] Based on publicly reported compute allocations and training budgets of OpenAI, Google DeepMind, Anthropic, Meta AI, and xAI as of 2025.
[2] TechInsights, "Data-Center AI Chip Market," 2024. NVIDIA holds 80-95% of the AI accelerator market depending on segment.
[3] Statista, "Top semiconductor foundries market share 2024." TSMC fabricates over 90% of the world's most advanced chips at sub-7nm nodes.
[4] Canalys, "Global cloud infrastructure spending," Q1 2025. AWS, Azure, and GCP held roughly 65% combined market share.
[5] U.S. Congress, CHIPS and Science Act of 2022. Appropriated $52.7 billion for semiconductor manufacturing, R&D, and workforce development.
[6] CNBC, "China readying $143 billion package for its chip firms in face of U.S. curbs," December 2022. ITIF estimates total spending since 2015 at $150 billion.
[7] Bloomberg, "Tech Giants Are Set to Spend $200 Billion This Year Chasing AI," November 2024.
[8] LessWrong, "An Overview of the AI Safety Funding Situation." Estimates total safety/alignment spending at $400-600 million annually.
[9] Wikipedia, "2010 flash crash." The Dow Jones dropped 998.5 points in minutes, temporarily erasing $1 trillion in market value on May 6, 2010.