Archimedes was half right.
A lever is nothing without a hand to push it. A hand is nothing without a mind to direct it. A mind is nothing without eyes to see where to place the fulcrum.
When you look at the graph, at nodes moving matter, money, and minds, what you are really seeing is three invisible substrates interacting. Strip away the stories, the ideologies, the brands, the flags. Underneath empires, startups, trading firms, armies, there are only three base resources:
- Intelligence
- Energy
- Information
Everything else is a derivative.
Money is stored energy plus information about who owes what to whom. Reputation is information about past intelligence. Military power is energy directed by intelligence informed by information. Technology is crystallized intelligence that amplifies energy or extends information. Every complex phenomenon in economics, geopolitics, and strategy resolves into these three base components when you look closely enough.
This chapter is the empirical foundation. The claims that follow are measurements, not metaphors. Every number cited here is publicly verifiable. The trajectory they describe is an observation of a curve already in motion. If the previous chapter was the grammar, this chapter is the physics.
Intelligence is the capacity to model reality and plan action: simulate futures, identify leverage points, choose between them. Compress the world into a small number of variables that matter, and change those variables on purpose. In human terms: cognition, judgment, strategy. In institutional terms: research labs, general staffs, organizational IQ. In machine terms: compute, algorithms, models.
Intelligence is not processing. A calculator processes. Intelligence models. It builds internal representations of external reality, tests those representations against evidence, and uses them to predict what will happen next. A chess engine does not just compute legal moves; it evaluates positions, anticipates opponent responses, and selects strategies. Scale this from chess to supply chains to protein folding to geopolitics and you approach what intelligence means as a substrate of power.
Energy is the capacity to move matter, to reshape the physical world according to plan. Calories, coal, oil, gas, uranium, photons. Engines, motors, muscles, rockets, fabs, server farms. Without energy, intelligence is a dream that never wakes. The most brilliant plan ever conceived, without energy to execute it, is just a hallucination.
Energy is often taken for granted because it is so fundamental. Computation requires energy. Physical action requires energy. Communication requires energy. When energy is abundant, it becomes invisible, like oxygen. When it is scarce, it becomes the only thing that matters. The history of civilization is a history of learning to capture and deploy larger flows of energy, from fire to agriculture to coal to oil to nuclear fission to photovoltaics. Each step up the energy ladder enabled everything that followed.
Information is the capacity to perceive reality accurately. Where resources are, what others intend, what the constraints are. Maps, sensors, satellites, ledgers, databases, market data, telemetry. Without information, intelligence operates on phantoms and energy is spent blindly. You cannot drill oil you don't know exists. You cannot intercept a missile you don't see. You cannot optimize a system you don't measure.
Information is the substrate most often confused with intelligence. They are distinct. A library contains information; understanding the library requires intelligence. A satellite photograph contains information; interpreting the photograph (seeing the troop movement, the crop failure, the new construction) requires intelligence. Information is the raw signal. Intelligence is the capacity to decode and act on it. As the information substrate grows, the value of intelligence grows with it, because there is more signal to decode.
These are three dimensions of a single phenomenon. Not independent resources you stack like bricks.
P = I × E × Info
P = effective power
I = intelligence
E = energy
Info = information
If any term goes to zero, the product goes to zero. A 10× increase in any one term multiplies total power by 10×, if the others can support it. Extreme imbalance leads to waste, fragility, or collapse. A brilliant general with no army wastes intelligence. A massive army with no intelligence wastes energy. A spy network with no one to report to wastes information.
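The multiplicative logic can be sketched in a few lines. This is a toy model with unit-free illustrative scores, not a measurement of anything:

```python
def effective_power(intelligence: float, energy: float, information: float) -> float:
    """Toy multiplicative model of the chapter's P = I x E x Info."""
    return intelligence * energy * information

balanced = effective_power(10, 10, 10)         # all three substrates present
no_information = effective_power(100, 10, 0)   # a zero term collapses the product
tenfold_intel = effective_power(100, 10, 10)   # one 10x term scales P by 10x

print(balanced, no_information, tenfold_intel)  # 1000 0 10000
```

The point of the sketch is the shape, not the numbers: zeros annihilate, and a tenfold gain in any one term is a tenfold gain in the product, provided the other terms can carry it.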
What follows is the empirical case for each substrate: what it looks like in numbers, how fast it is moving, and what the combined trajectory implies for the structure of power.
The intelligence substrate is undergoing the fastest capability expansion in human history. The numbers are public and verifiable.
Start with training compute — the raw computational investment required to build a frontier model. According to Epoch AI[1], the training compute of frontier AI models has grown by 4-5x per year since 2010. The cost in USD has grown by 3.5x per year since 2020. This is a launch trajectory, driven by simple economic logic: the returns from more capable models justify ever-larger investments. Each generation of model unlocks new applications, new revenue, new reasons to invest in the next generation.
In 2019, you could train the state of the art for $50,000. By 2023, it cost $100 million. By 2025, $2 billion. The exponential is not slowing. The drivers (more hardware, longer training runs, higher-performance chips) continue to compound.
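What a 4-5x annual growth rate implies is easy to understate, so here is the compounding arithmetic, using only the Epoch AI range quoted above:

```python
def growth_factor(rate_per_year: float, years: int) -> float:
    """Total multiplier after compounding a fixed annual growth rate."""
    return rate_per_year ** years

# The quoted 4-5x/year range compounds to three-plus orders of
# magnitude within five years.
for rate in (4.0, 5.0):
    print(f"{rate}x/year over 5 years -> {growth_factor(rate, 5):,.0f}x total")
```

At 4x per year, five years of compounding is roughly a thousandfold increase in training compute; at 5x, more than three-thousandfold.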
These are not merely larger numbers; they represent a qualitative transformation in what machine intelligence can do. Look at the benchmarks, standardized tests designed to measure specific cognitive capabilities. Factual knowledge, mathematical reasoning, scientific understanding, practical engineering, abstract pattern recognition.
Across the dimensions researchers have measured (factual knowledge[2], competition mathematics[3], graduate-level science[4], real-world software engineering[5], abstract reasoning[6]) the trajectory is the same: near-zero performance, rapid ascent to human-expert level, saturation, and the need for harder benchmarks. The cycle repeats every 12-18 months.
Each benchmark follows the same arc. Capabilities that were impossible eighteen months ago are now routine. Capabilities at the frontier today will be routine in eighteen months. The curve has not flattened. Reasoning-focused architectures like o1 and o3 have steepened it by demonstrating that test-time compute (letting the model "think longer") yields steep capability gains on top of the underlying training improvements.
Intelligence runs on silicon, not air. The hardware roadmap matters because it determines the ceiling of how much intelligence we can build and deploy.
THE HARDWARE ESCALATION
- NVIDIA H100 (2023)[7]: 80 billion transistors, TSMC 4N process, the workhorse of the current AI era.
- NVIDIA B200 (2025): two 104-billion-transistor dies (208 billion total), TSMC 4NP process, 192GB HBM3e, 2.4x the memory bandwidth of the H100, with a chiplet design enabling double the memory capacity.
- Blackwell Ultra (2026): next-generation architecture, expected further performance leap.

The entire 2025 B200 production was sold out before it shipped. Morgan Stanley reported that all Blackwell silicon for 2025 was already allocated by November 2024. Every chip has a buyer. Every buyer is building intelligence infrastructure. TSMC's process roadmap — 3nm (N3), 2nm (N2), 1.4nm (A14) — continues to deliver more transistors per watt per dollar. Each node transition yields 15-30% performance gains or equivalent power reduction. The hardware foundation for intelligence is not decelerating.
The chip roadmap sets the floor for how quickly intelligence can scale. Even if algorithmic progress stalled entirely (it has not), the hardware improvements alone would continue to drive capability gains for the foreseeable future. Better chips mean more FLOPS per dollar. More FLOPS per dollar means larger models, longer training runs, and faster inference. Each of these translates directly into more capable intelligence.
The capital behind this hardware is unprecedented.
The combined AI capital expenditure of Microsoft, Google, Amazon, and Meta has risen from roughly $150 billion in 2023 to over $630 billion guided for 2026[8]. Four companies spending more on intelligence infrastructure in a single year than the GDP of most nations.
These are the most profitable companies in history, committing capital at a scale that implies a shared conviction: intelligence infrastructure will generate returns large enough to justify the expenditure.
Goldman Sachs projects that AI companies may invest more than $500 billion in 2026 alone[8]. That is more, in a single year, than the United States spent on the entire Apollo program, adjusted for inflation. Unlike Apollo, this spending is expected to accelerate. The companies involved believe that whoever builds the most capable intelligence will capture a disproportionate share of the global economy.
They are not wrong about the direction. They may be wrong about who captures the value. That question is for later chapters. For now: the intelligence substrate is scaling at a pace that makes Moore's Law look leisurely. Training compute grows 4-5x per year. Benchmarks fall every quarter. Capital deployment is accelerating. None of this accounts for algorithmic improvement, which Epoch AI estimates contributes as much as hardware scaling to effective capability gains[9]. The drivers of compute growth since 2018 have been larger training clusters, longer training runs, and better hardware, all three continuing to compound simultaneously.
Intelligence runs on energy. Floating-point operations, gradient descent steps, inference calls: they all consume watts. The explosion of the intelligence substrate is creating an energy crisis that is simultaneously a massive opportunity, because the same intelligence being deployed is also solving the energy problem.
The International Energy Agency estimates that global data center electricity consumption reached approximately 415 TWh in 2024[10], about 1.5% of global electricity consumption. That number has been growing at 12% per year over the past five years. By 2026, the IEA projects data centers will consume between 650 and 1,050 TWh. By 2030, approximately 945 TWh in the base case, nearly 3% of all electricity generated on Earth. This growth rate is four times faster than electricity consumption growth from all other sectors combined.
To put 945 TWh in context, that is more electricity than France consumes annually. The world is building a new country-sized electrical load, consisting almost entirely of intelligence infrastructure, in less than a decade.
This demand has triggered a nuclear renaissance driven by the raw energy needs of artificial intelligence, not by governments or environmentalists. The largest technology companies on Earth are signing multi-billion-dollar nuclear power agreements because they cannot build intelligence infrastructure fast enough without guaranteed baseload power.
THE NUCLEAR RENAISSANCE FOR AI
- Microsoft[11]: signed a 20-year, $16 billion deal with Constellation Energy to restart Three Mile Island Unit 1 (835 MW). Constellation secured a $1 billion federal loan for the project. The reactor is now expected online in 2027, a year ahead of the original schedule.
- Amazon[12]: contracted 1,920 MW of carbon-free nuclear power from Talen Energy's Susquehanna plant through 2042, and is exploring new SMR construction in Pennsylvania. That is enough power to run roughly 500,000 homes — or one very large AI training cluster.
- Google[13]: signed the world's first corporate SMR agreement with Kairos Power for 500 MW of advanced nuclear capacity. The first Hermes 2 reactor, delivering 50 MW, is targeted for the Tennessee Valley Authority grid by 2030, with additional deployments through 2035.

In aggregate: big tech signed 10 GW+ of new US nuclear capacity in 2025 alone. That is roughly the output of 10 large nuclear reactors, contracted in a single year, by technology companies. The site of the worst nuclear accident in American history — Three Mile Island — is being restarted to power AI. The irony is structural, not accidental. It reveals the depth of the energy demand: we are overcoming decades of nuclear stigma because the intelligence substrate requires it.
Nuclear is the headline, but the bigger story is solar. The cost trajectory of photovoltaic energy is one of the most dramatic price collapses in industrial history, and a direct manifestation of the intelligence-energy recursion. Better manufacturing intelligence produces cheaper solar cells, which produce cheaper energy, which funds more intelligence.
Solar is now the cheapest form of new electricity generation in most markets on Earth[14]. Cheaper than coal, cheaper than gas, cheaper than nuclear. Batteries have fallen 90% in fifteen years[15]. At $70/kWh for grid storage, solar-plus-storage undercuts new natural gas in most markets. At these prices, solar is not competing with fossil fuels. It is replacing them on pure economics.
The most important energy number reframes the "problem" as a temporary misallocation rather than a fundamental constraint.
The Earth receives approximately 173,000 terawatts of solar energy continuously[16]. Humanity consumes approximately 18.8 terawatts. We use less than 0.011% of the solar energy hitting our planet. The energy bottleneck is a capture-and-conversion constraint, not a resource constraint. Which is to say: an intelligence problem.
This is the recursion in miniature. Energy appears to be a binding constraint on intelligence. But intelligence is solving the energy constraint through better solar cells, better batteries, better grid management, better nuclear designs. The more intelligence we deploy, the more energy we unlock. The more energy we unlock, the more intelligence we can run. The bottleneck is a staircase, and we are climbing it.
China's solar manufacturing capacity reached an estimated 1,200 GW per year by late 2025, nearly double the actual global installation rate of approximately 650 GW. The manufacturing problem is solved. The bottleneck has shifted to deployment: permitting, grid interconnection, transmission infrastructure, land acquisition. These are coordination problems. Coordination is an intelligence function. The more intelligent our planning systems become, the faster we can deploy the solar manufacturing capacity that already exists.
The energy trajectory is clear. Solar costs will continue declining. Battery costs will continue declining. Nuclear is experiencing a corporate-funded revival that bypasses the political gridlock that stalled it for decades. Data center power consumption will more than double by 2030. The question is whether the intelligence substrate will be powerful enough to solve the coordination and conversion problems fast enough to keep pace with its own appetite. So far, the answer has been yes.
The third substrate, information, is the one most people feel but least precisely quantify. The world is generating more data than ever. Sensors are everywhere. What does "everywhere" actually mean in numbers? And what does the trajectory imply for the structure of power?
THE DATA EXPLOSION
Global datasphere (total data created, captured, copied, and consumed):
- 2010: ~2 zettabytes
- 2015: ~15 zettabytes
- 2020: ~64 zettabytes
- 2024: ~149 zettabytes
- 2025: ~181 zettabytes projected[17]

Growth rate: the global datasphere has increased roughly 90× in fifteen years. Connected IoT devices: 41.6 billion in 2025, generating 79.4 zettabytes of data annually[18], with IoT data compounding at 28.7% per year. A single zettabyte is one trillion gigabytes. Humanity is now producing 181 trillion gigabytes of data per year — approximately 500 billion gigabytes per day, or roughly 5.7 million gigabytes per second.
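The per-day and per-second figures follow directly from the annual total; the conversion (taking a zettabyte as 10^21 bytes, i.e. a trillion gigabytes) is a one-liner:

```python
ZB_IN_GB = 1e12          # one zettabyte (10^21 bytes) is a trillion gigabytes

annual_gb = 181 * ZB_IN_GB           # projected 2025 datasphere in gigabytes
per_day = annual_gb / 365
per_second = per_day / 86_400

print(f"{per_day:.3g} GB/day, {per_second:.3g} GB/s")
# roughly 500 billion GB per day, roughly 5.7 million GB per second
```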
These numbers are almost too large to be meaningful, so make them concrete. The physical world, once opaque, sampled occasionally, understood partially, is becoming legible in real time, at fine resolution, across every domain simultaneously — legible at machine speed.
PLANETARY VISION
Planet Labs operates a constellation of over 200 satellites imaging the entire Earth's landmass every single day at 3-5 meter resolution[19]. Their new Pelican constellation, deploying in 2025, offers 50 cm resolution with up to 30 revisits per day at mid-latitudes. Every field, every construction site, every port, every military installation, every deforestation event — imaged daily. A refugee camp can be counted. A new factory can be detected. A military buildup can be tracked. Crop yields can be estimated. Flood damage can be assessed. All of this, every day, for the entire planet. This is continuous planetary perception. And it is available not just to governments, but to anyone who can afford a subscription.
For all of prior human history, seeing the world required being in the world. Generals needed scouts, traders needed agents, scientists needed field researchers. Now a single subscription to Planet Labs gives you a daily photograph of every acre on Earth. The information substrate has collapsed the distance between observation and reality to nearly zero, at least for the visual spectrum.
FINANCIAL TRANSPARENCY
- India: UPI (Unified Payments Interface) processed 228.3 billion transactions in 2025[20] — an average of more than 600 million per day, surpassing Visa's entire daily global volume. UPI now accounts for 84% of India's digital retail payments and roughly 50% of the world's real-time digital transactions. Monthly volume hit 21.6 billion transactions in December 2025, a single-month record.
- Sweden: less than 5% of transactions involve physical cash. Over 98% of the population owns a debit or credit card. The mobile payment app Swish is used by 86% of the population. Most Swedish banks no longer handle cash at all.
- Globally: digital payment adoption is accelerating in every major economy.

Every digital transaction is a data point — who bought what, from whom, when, where, for how much. The financial information substrate is approaching total coverage in the developed world and rapidly expanding in the developing world.
The financial information transition matters because money is the universal coordination signal of the economy. When cash dominated, economic activity was partially invisible. A cash transaction leaves no digital trace. It cannot be analyzed, correlated, or optimized. As economies go digital, the entire flow of economic activity becomes visible to banks, to governments, to platforms, and increasingly to AI systems that can process transaction streams at scale.
BIOLOGICAL LEGIBILITY
Human genome sequencing cost:
- 2001: ~$100 million (Human Genome Project)[21]
- 2007: ~$10 million
- 2014: ~$1,000 (Illumina milestone)
- 2024: ~$200 (Illumina NovaSeq X series)
- 2025: approaching $100 at scale (MGI Tech DNBSEQ-T20x2, Ultima Genomics)

A 1,000,000x cost reduction in 24 years. No other technology has achieved a million-fold cost reduction in a single generation. The human genome — 3.2 billion base pairs encoding the full operating system of a human being — can now be read for less than the cost of a pair of shoes. At $100 per genome, population-scale sequencing becomes economically viable. Entire national populations can be sequenced. Every cancer can be genetically profiled. Personalized medicine moves from research concept to clinical reality.
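The implied rate of that collapse is worth computing. A quick sketch using only the 2001 and 2025 endpoint figures quoted above:

```python
import math

start_cost, end_cost, years = 100e6, 100, 24   # 2001 and 2025 endpoints

total_reduction = start_cost / end_cost               # 1,000,000x overall
annual_factor = total_reduction ** (1 / years)        # ~1.78x cheaper each year
halving_time = math.log(2) / math.log(annual_factor)  # ~1.2 years per halving

print(f"{total_reduction:,.0f}x total; cost falls ~{annual_factor:.2f}x/year; "
      f"halves every ~{halving_time:.1f} years")
```

A sustained cost halving every fourteen months or so, for a quarter century — a faster decline than Moore's Law ever delivered for transistors.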
The trajectory is consistent across domains. Sensors get cheaper. Resolution gets finer. Coverage gets broader. Latency gets shorter. The cost of converting physical reality into digital information is falling on an exponential curve that mirrors and enables the intelligence curve.
- Satellite imagery: from occasional military reconnaissance to daily planetary scans at sub-meter resolution, with 30 revisits per day at mid-latitudes
- Financial flows: from quarterly ledger audits to real-time transaction streams covering billions of people, with India alone processing 640 million digital transactions per day
- Biological data: from a $100 million multi-year genome project to overnight sequencing for $200, heading to $100
- Supply chains: from paper manifests to RFID tags, GPS tracking, and real-time inventory systems spanning global networks with billions of tagged items
- Human behavior: from census surveys every decade to continuous location, communication, and activity data via 6.5+ billion smartphones
The information substrate is growing in resolution, coverage, speed, and accessibility simultaneously. The world is becoming transparent to any intelligence capable of processing the stream.
Information without intelligence is noise. The 181 zettabytes generated in 2025 are useless unless something can read, interpret, cross-correlate, and act on them. A satellite image of every acre on Earth is just pixels unless an intelligence can detect the troop movement, the crop failure, the illegal fishing fleet, the new construction project. India's 640 million daily UPI transactions are just numbers unless an intelligence can identify the spending pattern, the fraud signal, the economic trend, the credit risk.
The information explosion makes intelligence more valuable, not less. The more data there is, the more advantage accrues to the node that can process it fastest and most accurately. This is why the intelligence substrate and the information substrate are co-dependent rather than independent. Each makes the other more powerful. Each makes the other more necessary. The recursion runs through both.
The three substrates do not merely combine. They convert into each other, amplify each other, and form a recursive engine.
Intelligence → Energy. Intelligence discovers new energy sources and better ways to use them. Human minds found fire, then agriculture, then steam, then internal combustion, then fission. Each unlocked a larger energy budget. Today, AI systems design more efficient solar cells, optimize grid operations, and discover new materials for energy storage.
Energy → Intelligence. Intelligence is metabolically and mechanically expensive. You cannot run a brain on an empty stomach. You cannot run a GPU cluster without gigawatts. As energy availability rises, societies can support more brains, more schooling, more R&D, more compute. The $630 billion-plus in AI capex guided for 2026 is, at its root, an energy expenditure, converting electricity into intelligence.
Information → both. Information tells you where to point intelligence and energy. You cannot refine oil you have not discovered. You cannot optimize a supply chain you do not instrument. You cannot cure a disease whose genome you have not sequenced. Information makes intelligence effective and energy efficient.
Intelligence → Information. Intelligence builds better sensors, better databases, better analysis tools. Every advance in AI makes the information substrate richer. AI systems interpret satellite imagery, extract meaning from genomic data, detect patterns in financial flows. Intelligence does not only consume information; it creates it, by extracting signal from noise.
The core loop: intelligence finds new energy → energy funds more intelligence → information directs both → repeat.
This recursion is the engine of history. Where it runs cleanly, you see explosive growth and compounding advantage. Where it stalls, you see stagnation or collapse. The speed of the loop determines the rate of civilizational ascent.
A malnourished genius is trapped potential. High intelligence, low energy. The mind sees solutions but lacks the calories to implement them. For most of history, this was humanity: billions of processors running on caloric deficits. How many Newtons starved in fields? How many Turings died of preventable disease before age five? The energy constraint on intelligence was the dominant tragedy of the pre-industrial world.
An oil kingdom without engineers is squandered abundance. High energy, low intelligence. The resource exists but the capacity to convert it into compounding power is imported. Energy without intelligence leaks away. It is consumed rather than invested. The wealth dissipates within a generation or two because the feedback loop never forms.
A modern hegemon is the full product. The United States in the 20th century combined massive energy reserves, massive intelligence infrastructure, and massive information systems — tightly coupled into a single recursive engine. Oil fields, research universities, military intelligence networks, Silicon Valley, Wall Street, each feeding the others. That is what hegemony means in substrate terms: all three coupled into a functioning loop, not merely having more of each.
The recursion is the mechanism by which civilizations rise. The major economic transformations of recorded history are case studies of the intelligence-energy-information loop accelerating through a new cycle.
Case Study 1: Britain 1760–1870.
In 1760, Britain was a moderately prosperous agrarian kingdom. By 1870, it was the most powerful nation on Earth, producing more manufactured goods than the rest of Europe combined. What happened was a textbook recursion cycle, the first to run at a speed visible within a single human lifetime.
Intelligence initiated. Practical tinkerers (Newcomen, Watt, Trevithick) applied mechanical intelligence to the problem of pumping water out of coal mines. The steam engine was an engineering solution to an energy extraction problem: mines were flooding, and muscle power could not keep them dry fast enough.
Energy unlocked. Coal output surged. The steam engine, originally designed to mine coal more efficiently, became a general-purpose energy platform. Factories could now locate anywhere, no longer only next to rivers with waterwheels. Energy ceased to be geographically constrained. The steam engine powered textile mills, iron foundries, and eventually locomotives and ships. A single steam engine could do the work of dozens of horses, continuously, without rest.
Information followed. The telegraph, invented in the 1830s, created the first information network operating at speeds beyond human travel. It was commercially deployed along railway lines in the 1840s, solving a coordination problem for another energy technology: single-track lines needed a way to avoid head-on collisions. Railway schedules, commodity prices, and military orders could now move at the speed of electrons rather than the speed of a horse.
The recursion compounded. Better information enabled better allocation of energy and intelligence. Better energy funded more intelligence: schools, universities, research institutions, professional engineering. Better intelligence designed better engines, steel, and chemistry. Real GDP per capita roughly doubled between 1780 and 1870, reaching about $3,263 per person. The cotton industry's share of national output rose from 2.6% in 1760 to 22% in 1831. Britain's population nearly tripled in eighty years while average income doubled, breaking the Malthusian trap for the first time in human history.
BRITISH RECURSION LOOP
Intelligence (steam engine design) → Energy (coal extraction at scale) → Information (telegraph networks along railway lines) → More Intelligence (funded research, engineering schools, professional societies) → More Energy (better engines, steel, railways, steamships) → More Information (global submarine cable networks, postal systems, newspapers, Lloyd's of London) Result: Real GDP per capita ~2× in 90 years. Industrial output dominance over all of continental Europe. The largest empire in human history. The first society to break the Malthusian trap.
Case Study 2: The United States 1880–1970.
America ran the recursion faster and harder than Britain, on a continental scale. Electrification transformed the factory floor. Distributed motors replaced central steam shafts, enabling the assembly line and the modern factory. The GI Bill (1944) sent 8 million veterans to college[22], the largest single expansion of trained intelligence in history. Bell Labs, Los Alamos, and the research university system produced the transistor, radar, nuclear energy, and the foundations of computing. Each substrate amplified the others. Cheap electricity funded mass education; educated workers built computing infrastructure; better information systems enabled nuclear energy. US real GDP per capita went from $6,000 in 1900 to over $25,000 by 1970, a 4x increase in seventy years, substantially faster than Britain's 2x in ninety. The loop was spinning faster because each substrate was larger and more tightly coupled.
AMERICAN RECURSION LOOP
Energy (electrification, oil) → Intelligence (GI Bill, Bell Labs, research universities) → Information (transistor, computing, satellites, ARPANET) → Energy (nuclear fission) → Intelligence (semiconductor industry) → Information (internet, GPS, mobile phones) Result: GDP per capita 4× in 70 years. Global hegemony. The first nation to land humans on the Moon — requiring all three substrates at extreme levels simultaneously.
Case Study 3: China 2000–2025.
China executed the most compressed recursion cycle in history. Nominal GDP went from $1.2 trillion in 2000 to $18.8 trillion by 2024[23], a 15x increase in 24 years (a nominal figure, so partly price and exchange-rate effects, but the real acceleration is still the fastest on record). The sequence: coal-powered manufacturing (energy) funded mass STEM education (intelligence), which built the WeChat/Alipay digital ecosystem serving a billion users and a network of 700 million surveillance cameras (information), which fed back into energy as the world's largest solar manufacturing base, at 1,200 GW/year capacity[24]. Clean energy now contributes 10% of GDP[25]. Britain took 90 years to double GDP per capita. America took 70 years for 4x. China achieved 15x in 24. The loop spins faster each time because the intelligence available to drive it is greater.
Each case study demonstrates the same dynamic: the recursion loop begins with a breakthrough in one substrate, which funds expansion in the others, which feeds back into further breakthroughs. The speed of the loop determines the speed of ascent. Britain took a century. America took seventy years. China took twenty-five. Each cycle ran faster because each started with a larger base of available intelligence, more energy infrastructure, and a richer information environment.
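The acceleration can be put in a single comparable unit — implied compound annual growth — using the multiples and durations quoted above. A rough sketch only: the underlying measures differ (per-capita real GDP for Britain and America, nominal GDP for China), so treat it as an order-of-magnitude comparison:

```python
cycles = {
    "Britain 1780-1870": (2, 90),        # ~2x real GDP per capita
    "United States 1900-1970": (4, 70),  # ~4x real GDP per capita
    "China 2000-2024": (15, 24),         # ~15x nominal GDP (not like-for-like)
}

for name, (multiple, years) in cycles.items():
    cagr = multiple ** (1 / years) - 1   # implied compound annual growth rate
    print(f"{name}: {cagr:.1%} implied annual growth")
# Britain ~0.8%/yr, the United States ~2.0%/yr, China ~11.9%/yr
```

Each cycle runs at roughly an order of magnitude above the pre-industrial baseline of the one before it.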
The current cycle (AI intelligence amplifying energy production amplifying information capture) is running faster than any previous cycle. Not because the underlying physics changed but because the intelligence substrate is now itself recursive. For the first time, intelligence is designing better intelligence. The loop has a loop inside it. AI systems are now being used to design better AI chips, optimize AI training runs, and discover better AI architectures. This inner loop did not exist in any previous recursion cycle. It is the qualitative difference that makes the current transition unprecedented.
Review the history and notice what didn't change.
Through each era transition (Malthusian to Industrial to Information) one variable remained approximately constant: intelligence was roughly symmetric across nodes. A 2-3x spread at the individual level, perhaps 5-10x at the institutional level. Same processor, different training data. Markets, hierarchies, democracy, rule of law: every institution we have assumes that no single mind can safely run everything.
This assumption is about to fail.
We are building nodes whose intelligence exceeds human baselines not by 2–3×, but by orders of magnitude on economically relevant tasks.
This is not "AI is pretty good at autocomplete." Consider:
- Models that ingest and cross-correlate the entire public internet plus private corpora (billions of documents, images, codebases, scientific papers) simultaneously
- Systems that never sleep, never forget, and operate at electronic speeds, processing in seconds what would take a human team months
- Recursive improvement loops where intelligence designs better intelligence, running on hardware that keeps accelerating, funded by capital flows that keep increasing
A frontier model today can read more text in a second than a human can in a lifetime, maintain coherent reasoning across millions of tokens, generate working code and strategic analyses on demand, and operate continuously across millions of parallel instances. This is the primitive version, the one we will look back on as a toy.
Review the benchmark data from earlier in this chapter. The trajectory is not ambiguous:
- MMLU: from 43.9% to 92%+ in four years. Human expert level reached and passed.
- MATH: from 52% to 100% on competition problems. In two years.
- SWE-bench: from 2% to 81% on real software engineering. In eighteen months.
- ARC-AGI: from 5% to 87.5% on novel abstract reasoning. A benchmark designed to be AGI-hard.
- GPQA: from 36% to 93% on graduate-level science. PhD-level reasoning, achieved by general-purpose models.
These are not narrow savant capabilities. They span the full range of economically relevant cognition: scientific reasoning, mathematical problem-solving, code generation, general knowledge, abstract pattern recognition. The breadth matters as much as the depth. A system that could only do math would be a calculator. A system that can do math, science, engineering, law, medicine, and abstract reasoning at expert level is something qualitatively different.
Simultaneously, the world is becoming legible. Money already moves through digital rails that record every transfer. Supply chains tag and log every shipment. Nearly every human carries a sensor array in their pocket (GPS, camera, microphone, accelerometer) feeding data back through a small number of chokepoints. Three cloud platforms. A handful of social networks. The information substrate is not just growing; it is concentrating into channels that a sufficiently intelligent node could monitor comprehensively.
And the energy to run all of this is being secured. Four companies are spending over $600 billion in capital expenditure in 2026 alone. Three Mile Island, the site of America's worst nuclear accident, is being restarted to power data centers. That single fact tells you how seriously the infrastructure layer is being built. Data center power consumption is growing at 15% per year, and the money to feed it is already committed.
We are headed toward at least one node with:
- Intelligence: effectively millions of human-equivalent years of cognition per day, improving on a quarterly cadence, with benchmark capabilities that already exceed human experts across dozens of domains
- Information: real-time access to the majority of digital activity, planetary-scale satellite imagery updated 30 times daily, and a rapidly increasing fraction of the physical world through 41.6 billion IoT sensors generating 79.4 zettabytes per year
- Energy: direct command over data centers consuming hundreds of TWh annually, backed by dedicated nuclear plants and massive solar installations, with $650+ billion in annual infrastructure investment and energy costs continuing to decline
For the first time in history, intelligence is about to go vertical and centralize, while information is global and real-time, and energy is increasingly programmable.
Return to the equation:
P = I × E × Info
I = intelligence, the capacity to model reality and plan action
E = energy, the historical battleground
Info = who knew what, who could see what
For most of history, I was roughly constant across nodes, all human, varying by small multiples. The game was fought over E and Info. Empires competed for oil fields and trade routes (energy) and for intelligence networks and communication infrastructure (information). But intelligence itself was a fixed quantity, bounded by the human brain.
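The shift the equation implies can be made concrete with a toy calculation. Every node value below is hypothetical, chosen only to illustrate the argument; this is a sketch of the structure, not a measurement:

```python
# Toy sketch of P = I x E x Info across three nodes in the graph.
# All values are illustrative assumptions, not data.

def power(i, e, info):
    """Power as the product of the three substrates, per the equation above."""
    return i * e * info

def share(nodes, name):
    """One node's fraction of total power in the graph."""
    return nodes[name] / sum(nodes.values())

# Symmetric era: intelligence varies by small multiples (the 2-3x spread),
# so relative power tracks energy and information endowments.
symmetric = {"A": power(1.0, 10, 5), "B": power(2.0, 3, 4), "C": power(1.5, 6, 2)}

# Asymmetric era: node A's intelligence goes vertical; all else unchanged.
asymmetric = {"A": power(1000.0, 10, 5), "B": power(2.0, 3, 4), "C": power(1.5, 6, 2)}

print(f"Symmetric era, A's share of power:  {share(symmetric, 'A'):.1%}")
print(f"Asymmetric era, A's share of power: {share(asymmetric, 'A'):.1%}")
```

With symmetric intelligence, node A holds roughly half the power; once its I term goes vertical, its share approaches the whole graph, even though the E and Info endowments never changed. That is the sense in which intelligence becomes the master variable.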
Now I is about to go vertical. Not incrementally. Exponentially.
The data makes the case inescapable. Training compute grows 4-5x per year. Benchmark performance doubles or triples annually on tasks considered impossible months earlier. Capital investment is growing at 67-74% year-over-year. Hardware density continues on its exponential trajectory. Algorithmic improvements, the part measured least precisely, contribute gains at least as large as hardware scaling. Epoch AI has documented that the drivers of compute growth since 2018 are larger training clusters, longer training runs, and increases in hardware performance, all three compounding simultaneously.
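The compounding arithmetic behind "vertical" is worth making explicit. A minimal sketch; the 4-5x annual rate is the one cited above, while the time horizons are illustrative assumptions:

```python
# Back-of-envelope compounding of training compute growth.
# The 4-5x/year rate is from the text; the horizons are assumed.

def total_multiple(rate_per_year, years):
    """Total growth multiple after compounding for a number of years."""
    return rate_per_year ** years

for rate in (4, 5):
    for years in (3, 5):
        print(f"{rate}x/year for {years} years -> {total_multiple(rate, years):,}x")
```

At 4x per year, five years of scaling multiplies training compute by over a thousand; at 5x, by over three thousand. And this counts hardware scaling alone, before the algorithmic gains the text notes are at least as large.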
When intelligence was the bottleneck, you could compensate with energy or information. A dumb army with enough soldiers could overwhelm a smart one. A blind trader with enough capital could survive against a well-informed one. This was the world we evolved in, the world our institutions were designed for.
When intelligence ceases to be the bottleneck, when one node can out-think all others combined, the other substrates become subordinate. Energy and information still matter but only as inputs to the intelligence function. The node that can think will figure out how to acquire energy and information. It will design better extraction technologies, discover previously invisible vulnerabilities, manipulate prices and institutions to redirect existing flows. Intelligence becomes the master variable.
This is already happening. When Microsoft spends $145 billion in a single year on AI infrastructure, it is converting capital (stored energy) into intelligence infrastructure, which it expects will generate information advantages (better products, better predictions, better decisions) that yield more energy (revenue, profit, reinvestment capability). The recursion loop is running at corporate scale, in public, in real time.
This is the Sybilian transition: a phase change in the structure of power, as intelligence concentrates into one or a few meta-nodes that sit atop the graph. The hidden constant that held for millennia (roughly symmetric intelligence across nodes) is breaking. And with it, the institutions premised on that symmetry come under pressure.
We have lived for millennia in an order built on distributed power, because intelligence was distributed. Nearly every institution, norm, and strategy we have is adapted to that order.
We are about to live in a world where intelligence concentrates.
If you are reading this, you are probably already optimizing across the three substrates: acquiring information, building intelligence, securing energy and resources. You have been playing this game without quite having the vocabulary for it.
Now you have the vocabulary. And now you see the problem.
The numbers in this chapter are measurements of a process already underway. Training costs rising 1,000× per model generation. Benchmarks that were impossible two years ago now saturated. Four companies spending more on AI infrastructure in 2026 than the GDP of 170 countries. Data center power consumption doubling by 2030. The global datasphere growing 90× in fifteen years. Solar costs down 90% since 2010. Battery costs down 90% over the same period. The cost of sequencing a human genome down from $100 million to $200. India processing 640 million digital transactions per day on a single platform. 41.6 billion IoT devices generating 79.4 zettabytes of data per year. Three Mile Island being restarted to power AI models.
All of these curves are accelerating. They are not independent; they feed each other through the recursion loop that has driven the major civilizational transformations of history. The loop is the same. It has never run this fast, because it has never had a recursive intelligence substrate powering it.
The game is changing. The rules that rewarded distributed intelligence for ten millennia are about to reward concentrated intelligence instead. The nodes that recognize this first will have an advantage that compounds. The nodes that keep playing by symmetric-era rules will find themselves outmaneuvered by something they never saw coming.
The transition is coming regardless. The question is whether you understand it clearly enough to navigate it.
Before we can navigate, we must understand what we are leaving behind.
The markets, states, and hierarchies we have built are not arbitrary. They were solutions, often brilliant ones, to the constraints of the symmetric era. They worked because they matched an era when intelligence was distributed and bounded. Markets aggregated dispersed information across millions of similar processors. Hierarchies compressed bandwidth for nodes with limited cognition. Democracy harnessed the wisdom of comparable minds. Rule of law constrained power that was too fallible to be unchecked.
To understand why they are breaking, we must first understand why they worked at all.
We must examine the hacks.