$300 Billion in AI Infrastructure Spending: Where the Hyperscaler Capex Is Actually Going


Key Claim

Microsoft, Google, Amazon, and Meta are on track to spend over $320 billion combined on AI infrastructure across 2025 and 2026 — but more than 60% of that is going into power infrastructure, cooling, and data centre construction rather than compute hardware, reflecting a constraint that has shifted from chips to electricity.

Key Takeaways

  • Microsoft guided $80B in capex for fiscal year 2026; Google committed $75B for 2025 alone; Meta set a $60–65B range; Amazon’s AWS capex exceeded $105B in 2025
  • Global data centre electricity demand, driven in large part by AI, is projected to reach 1,000 TWh by 2026 — roughly the annual electricity consumption of Japan
  • Microsoft has signed long-term power purchase agreements with nuclear operators, including Three Mile Island, specifically for AI data centre load
  • Approximately 40% of announced AI data centre projects face construction delays due to power infrastructure bottlenecks, not chip supply

When analysts and journalists report hyperscaler AI capex numbers, the natural inference is that the money is going into GPUs. Some of it is — NVIDIA’s revenue confirms massive chip purchases. But a structural shift in where the constraint actually lies has changed the composition of AI infrastructure spending. The binding limit is no longer chips, which are constrained but available with sufficient advance commitment. The binding limit, in an increasing number of markets, is electricity.

The Numbers and What They Include

Microsoft’s fiscal year 2026 capital expenditure guidance of $80 billion represents roughly a 2x increase over FY2023 levels. Google committed $75 billion for calendar year 2025 — announced in a single earnings call, which moved Alphabet’s stock. Meta’s $60–65 billion range for 2025 is its highest capex year by a significant margin. Amazon’s AWS infrastructure investment exceeded $105 billion in 2025 across data centre construction, network build-out, and hardware.

These figures include GPU and custom silicon purchases, but they also include land acquisition, data centre construction, mechanical and electrical infrastructure, networking, and increasingly, direct power infrastructure investment. When Microsoft pays for a transmission line upgrade to connect a new Virginia data centre campus to the grid, that appears in capex. When Google funds a solar farm under a power purchase agreement, the capitalised infrastructure investment appears in the same line.

Why Power Has Become the Primary Constraint

A single NVIDIA H100 GPU consumes approximately 700 watts at peak load. An 8-GPU server — the standard HGX configuration — draws roughly 10–12 kilowatts once CPUs, networking, and fans are included. A data centre with 100,000 H100s, the scale of the largest AI training clusters, draws approximately 70 megawatts for the GPUs alone, and well over 100 megawatts at the facility level once server overhead and cooling are counted. That is the continuous power consumption of a small city, drawn from a single grid connection point.
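The arithmetic above can be sketched as a back-of-envelope estimate. The per-server overhead and the facility PUE (power usage effectiveness) figures below are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope power estimate for a large GPU training cluster.
# GPU_PEAK_W follows the ~700 W figure cited in the text; the server
# overhead and PUE values are assumptions chosen for illustration.

GPU_PEAK_W = 700          # approx. H100 peak draw, per the text
GPUS_PER_SERVER = 8       # standard HGX configuration
SERVER_OVERHEAD_W = 3000  # assumed: CPUs, NICs, fans per 8-GPU server
PUE = 1.2                 # assumed facility power usage effectiveness

def cluster_power_mw(num_gpus: int) -> dict:
    """Estimate GPU-only and total facility power draw in megawatts."""
    gpu_mw = num_gpus * GPU_PEAK_W / 1e6
    servers = num_gpus / GPUS_PER_SERVER
    it_mw = gpu_mw + servers * SERVER_OVERHEAD_W / 1e6
    return {"gpu_only_mw": gpu_mw, "facility_mw": it_mw * PUE}

estimate = cluster_power_mw(100_000)
print(estimate)  # 70 MW of GPUs becomes ~130 MW at the facility level
```

Under these assumptions, the 100,000-GPU cluster's 70 MW of GPU load implies roughly 130 MW at the grid connection — which is why the interconnection queue, not the chip order book, sets the schedule.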

Grid connections of this scale are not available on demand. Utility interconnection queues in the United States — the formal process for connecting large power consumers to the grid — currently run 4–7 years in most regions. Hyperscalers with the resources to sign long-term power purchase agreements and fund transmission infrastructure upgrades can accelerate this timeline. Organisations without those resources cannot. The power constraint is, in practice, a barrier to entry that entrenches the hyperscaler position in AI infrastructure.

Nuclear, Gas, and the Energy Mix

Microsoft’s 20-year power purchase agreement with Constellation Energy to restart the Three Mile Island nuclear plant — Unit 1, not the unit that experienced the 1979 accident — is the most prominent example of hyperscalers moving up the energy supply chain. The deal, valued at approximately $1 billion, secures 835 megawatts of carbon-free baseload power specifically for Microsoft’s data centre operations. Google has signed similar agreements with nuclear operators and has committed to running on 24/7 carbon-free energy by 2030.

Not all of the power sourcing is carbon-free. Natural gas-backed data centre capacity is increasing alongside renewable and nuclear procurement, particularly for facilities that require guaranteed baseload power in regions where renewable intermittency creates reliability risk. The AI industry's energy footprint is growing faster than its renewable energy procurement, despite prominent public commitments to carbon-free operation.

What This Means for AI Pricing and Access

The capex intensity of AI infrastructure creates a structural dynamic for API pricing over the next three years. Hyperscalers are deploying capital now — at rates that imply significant fixed cost commitments — on the expectation that AI inference demand will justify the investment. If demand scales as projected, the fixed costs get amortised across growing query volumes, and API prices fall. If demand growth is slower than the capex curve, the math reverses.
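The amortisation dynamic can be made concrete with a toy model. Every number below is a hypothetical illustration — none of these figures are disclosed by any hyperscaler:

```python
# Toy amortisation model: how unit cost moves with inference demand.
# FIXED and BASE_QUERIES are hypothetical values for illustration only.

def cost_per_million_queries(annual_fixed_cost_usd: float,
                             annual_queries: float) -> float:
    """Spread a fixed annual infrastructure cost across query volume."""
    return annual_fixed_cost_usd / annual_queries * 1e6

FIXED = 20e9         # assumed annual fixed cost implied by capex commitments
BASE_QUERIES = 1e12  # assumed current annual inference query volume

# If demand doubles, unit cost halves; if demand stalls, it does not.
for growth in (1.0, 2.0, 4.0):
    unit = cost_per_million_queries(FIXED, BASE_QUERIES * growth)
    print(f"{growth:.0f}x demand -> ${unit:,.0f} per million queries")
```

The point of the sketch is the shape, not the numbers: unit cost falls in inverse proportion to demand growth, so the capex bet only pays off if query volume outruns the fixed-cost curve.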

For enterprise AI buyers, the immediate practical implication is that AI API pricing is likely to remain under downward pressure through 2026–2027 as new capacity comes online and hyperscalers compete for workloads. The medium-term implication is that power infrastructure — not chips — will determine where AI data centre capacity gets built and who can access it at what cost.

Source Trail

Microsoft FY2026 capex guidance (earnings call, Jan 2026) · Alphabet Q4 2024 earnings · Meta Q4 2024 capex announcement · AWS 2025 infrastructure investment disclosures · IEA Electricity 2026 report · Constellation Energy / Microsoft Three Mile Island PPA filing


About Sarah Chen

Sarah Chen analyses the economic forces shaping the AI industry — venture capital flows, enterprise spending, and market concentration. She holds an MBA and previously covered enterprise software and fintech at a specialist research firm. Her coverage draws on SEC filings, earnings calls, and primary financial data to find the signal beneath the noise.
