- Global data center electricity consumption reached 415 TWh in 2024, has grown at 12% per year, and is projected to more than double to 945 TWh by 2030, driven almost entirely by AI workloads.
- The US interconnection queue holds over 2,600 GW of pending capacity requests; median wait time has risen from under 2 years in 2008 to over 8 years in 2025, making standard grid connection structurally inadequate for hyperscaler timelines.
- Microsoft, Amazon, and Google have independently contracted directly with nuclear generators — paying 2–3x PJM spot prices for supply certainty — bypassing the public grid rather than waiting in the queue.
- Liquid cooling cuts total facility power draw by roughly 27% versus a typical air-cooled design, but against 12% annual demand growth that one-time saving absorbs less than three years of load increase.
Key Claim: The AI infrastructure build-out has produced a power constraint that no GPU shipment can resolve — and the hyperscaler response of contracting directly with nuclear plants is the clearest signal that the standard grid interconnection process has effectively failed for large-scale AI deployment.
In northern Virginia in July 2024, a voltage fluctuation triggered the simultaneous disconnection of 60 data centers. The abrupt loss of roughly 1,500 MW of load produced a generation surplus that required emergency grid adjustments to prevent cascading outages across the PJM Interconnection, the largest grid operator in the United States. The incident was not a freak event. It was a preview of what happens when roughly 9 gigawatts of always-on load — a figure that has since grown to 12.1 GW — sits inside a grid that was not designed for it, in a region where developers are already quoting seven-year delays for new interconnection agreements.
The Scale of Demand No Grid Operator Anticipated
Global data center electricity consumption reached approximately 415 TWh in 2024 — about 1.5% of global electricity use — and has grown at 12% per year for five consecutive years, according to the International Energy Agency's Energy and AI report. The IEA's Base Case projects that figure more than doubling to 945 TWh by 2030 and reaching 1,200 TWh by 2035. U.S. data centers alone consumed 183 TWh in 2024; the IEA projects consumption rising to over 200 TWh in 2025 and more than 250 TWh in 2026.
Goldman Sachs Research puts the current global data center installed capacity at approximately 55 GW and projects it reaching 122 GW by 2030 — a 165% increase from 2023 levels — driven almost entirely by AI training and inference workloads, according to their data center power demand analysis. The Department of Energy has separately found that data centers accounted for 4.4% of U.S. electricity consumption in 2023 and projects that reaching 12% by 2028. These are not incremental changes to the power system — they represent an industrial demand shock applied to infrastructure with decade-long lead times.
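As a sanity check, the compound annual growth rates implied by these projections can be computed directly from the figures above. The sketch below is illustrative: the helper name `implied_cagr` is ours, and it treats Goldman's "current" 55 GW base as a 2024 figure, giving both projections a six-year horizon to 2030.

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start/end projection."""
    return (end / start) ** (1 / years) - 1

# IEA: global data center demand, 415 TWh (2024) -> 945 TWh (2030)
iea = implied_cagr(415, 945, 6)

# Goldman Sachs: installed capacity, 55 GW -> 122 GW (assuming a 2024 base)
gs = implied_cagr(55, 122, 6)

print(f"IEA implied CAGR: {iea:.1%}")      # 14.7%
print(f"Goldman implied CAGR: {gs:.1%}")   # 14.2%
```

Notably, both projections imply growth slightly faster than the 12% annual rate observed over the past five years.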
PJM, whose footprint spans 13 states from New Jersey to Illinois, projects its peak demand will grow by 32 GW between 2024 and 2030. All but 2 GW of that growth is attributable to data centers. The grid was not built for this, and the interconnection process that governs how new loads connect to it has collapsed under the pressure.
The Interconnection Queue: A Five-to-Eight-Year Wall
The U.S. interconnection queue held over 2,600 GW of pending capacity requests as of early 2026. The median wait time from initial application to commercial operation has risen from under two years in 2008 to over eight years in 2025. In practical terms, a hyperscaler planning a new 500 MW campus today cannot reliably expect a standard grid connection before 2033 — by which point the AI model generations it was meant to serve will have turned over multiple times.
Northern Virginia illustrates the failure concretely. Developers are citing seven-year interconnection delays; Virginia utilities require new substations and major transmission upgrades before construction can begin, effectively making power infrastructure the critical path, not the building itself. Virginia’s data center electricity demand reached approximately 12.1 GW in 2025, up from 9.3 GW in 2024, according to Engineering News-Record. In Texas, CenterPoint Energy reported a 700% increase in large load interconnection requests between late 2023 and late 2024.
The regulatory response has been significant but slow. On 23 October 2025, the Department of Energy invoked the rarely used Section 403 authority to direct FERC to initiate rulemaking on large load interconnection — a step that legal analysts note could save four to six years in project timelines if implemented. On 18 December 2025, FERC issued an order directing PJM to establish three new transmission service categories for co-located loads, addressing how data centers can connect directly to generation assets without flowing power through the public transmission system, per Baker Botts' analysis of the order. Texas enacted Senate Bill 6 in June 2025, formalizing the ERCOT Large Load Interconnection Study process and establishing cost-sharing for required grid upgrades. These are meaningful steps. They are also years behind the demand curve.
Hyperscalers Buy Their Way Around the Queue
The three largest hyperscalers have reached the same conclusion independently: the standard interconnection process cannot meet their timelines, so they are contracting directly with firm, dispatchable generation — specifically nuclear — and in several cases acquiring the land directly adjacent to the plant.
Microsoft’s deal is the most visible. In September 2024, Constellation Energy announced a 20-year, 835 MW power purchase agreement under which Microsoft will purchase output from the restarted Three Mile Island Unit 1 in Pennsylvania. Constellation secured a $1 billion DOE loan toward the $1.6 billion restart project; the plant, which shut down in 2019, is targeted to return to service in 2027. Analysts estimate Microsoft is paying approximately $110–$115 per MWh — roughly two to three times PJM spot prices — for the certainty of firm, grid-connected carbon-free power.
Amazon has taken the co-location approach further. The company purchased Talen Energy’s 960 MW Cumulus data center campus, situated on a 1,200-acre site directly adjacent to the Susquehanna nuclear station in Pennsylvania, for $650 million in 2024. In June 2025, the two companies expanded their arrangement: under the revised PPA, Talen will supply Amazon with 1,920 MW of carbon-free nuclear power through 2042, with full volume expected no later than 2032. Amazon is simultaneously pursuing new SMR capacity: its updated Cascade project near Richland, Washington now envisions 12 reactors producing up to 960 MW; a separate agreement with Dominion Energy targets at least 300 MW near North Anna in Virginia; and an investment in X-energy is aimed at supporting over 5 GW of SMR projects by 2039.
Google's strategy centers on advanced reactor technology rather than existing plants. In October 2024, Google and Kairos Power signed a Master Plant Development Agreement to deploy a fleet of molten-salt-cooled Gen IV SMRs totaling up to 500 MW by 2035. In August 2025, the Tennessee Valley Authority signed a PPA with Kairos Power for the first 50 MW unit, targeting 2030 — the first agreement by a U.S. utility to purchase electricity from an advanced Gen IV reactor. Kairos Power's pebble-bed, molten-salt design operates at low pressure, which reduces containment requirements and, the company argues, construction timelines relative to conventional light-water reactors.
Cooling Architecture as a Forcing Function
The power story does not end at the meter. High-density AI clusters have made the interior architecture of the data center itself a power efficiency variable. Air cooling, which dominated data center design until the mid-2010s, struggles above 10–15 kW per rack. NVIDIA’s H100 and H200 clusters routinely run at 30–60 kW per rack; newer configurations push to 130 kW. Average rack power density increased 38% between 2022 and 2024, with the steepest growth in AI and hyperscale deployments.
Liquid cooling — direct-to-chip, rear-door heat exchangers, or full immersion — achieves a power usage effectiveness (PUE) consistently below 1.2 regardless of ambient conditions. Air-cooled facilities typically run at 1.4–1.8 PUE. The gap matters: a 300 MW cluster in a facility running PUE 1.6 draws 480 MW from the grid; the same cluster in a PUE 1.1 facility draws 330 MW — a difference of 150 MW that must be sourced, transmitted, and paid for. By 2024, liquid-based cooling had reached 46% of the overall data center cooling market, and all major new hyperscale builds are specifying it for AI workloads. CoreWeave, which confirmed a 60-day construction delay at its 260 MW Denton, Texas facility in late 2025 due to adverse weather, has designed all new facilities from 2025 onwards for liquid cooling.
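The arithmetic behind that gap is simply the PUE definition: total facility draw equals IT load times PUE. A minimal sketch, using the 300 MW cluster figures above (the function name `grid_draw_mw` is ours):

```python
def grid_draw_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by PUE = total facility power / IT load."""
    return it_load_mw * pue

air = grid_draw_mw(300, 1.6)     # air-cooled facility
liquid = grid_draw_mw(300, 1.1)  # liquid-cooled facility

print(f"Air-cooled:    {air:.0f} MW")           # 480 MW
print(f"Liquid-cooled: {liquid:.0f} MW")        # 330 MW
print(f"Difference:    {air - liquid:.0f} MW")  # 150 MW
```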
The efficiency gains from liquid cooling reduce, but do not eliminate, the interconnection problem. A PUE improvement from 1.5 to 1.1 cuts total facility power draw by roughly 27%. Against a demand curve growing at 12% annually, that one-time gain absorbs less than three years of growth.
What to Watch
- SMR delivery schedules. The nuclear deals assume advanced reactor technologies that have not yet been built at commercial scale in the United States. Kairos Power’s first 50 MW TVA unit targets 2030; X-energy’s Xe-100 is targeting similar timelines. Schedule slippage in these first-of-kind builds would expose significant gaps in hyperscaler power planning for the 2030–2035 window.
- FERC co-location rulemaking. The December 2025 PJM order and the DOE-directed preliminary rulemaking are the most consequential regulatory developments in grid interconnection in a decade. Whether FERC’s framework allows large loads to effectively bypass the standard queue — or simply creates a parallel queue with similar delays — will determine whether the 2026–2030 buildout stays on plan.
- Virginia’s legislative response. Reports indicate Virginia lawmakers have proposed bills allowing utilities to delay connecting loads requiring over 90 MW if grid stability is at risk. If enacted, that would make the largest U.S. data center market effectively closed to new capacity for an undefined period. The outcome of such legislation would be a leading indicator of how other constrained markets — Georgia, Ohio, the Carolinas — respond.
- The efficiency wildcard. Recent advances in inference-optimised model architectures suggest that compute requirements per query can be substantially reduced through design choices. If the industry broadly shifts toward more efficient inference architectures, near-term demand forecasts — including Goldman’s 165% figure — may prove high. This is the one scenario in which the infrastructure ceiling becomes self-correcting.
Further Reading
- IEA, Energy and AI — Energy Demand from AI
- Goldman Sachs, “AI to drive 165% increase in data center power demand by 2030”
- RMI, “PJM’s Speed to Power Problem and How to Fix It”
- White & Case, “DOE directs FERC to accelerate interconnection of data centers”
- Baker Botts, FERC PJM Co-Location Order, December 2025
- CNBC, “Google, Kairos Power plan advanced nuclear plant for TVA grid by 2030,” 18 August 2025
This article was produced with AI assistance and reviewed by the editorial team.



