AI expansion is straining the power grid: 7 energy investment theses you must know

Energy is the real bottleneck to the growth of intelligence.

Author: Joseph Ayoub

Compiled by: Deep Tide TechFlow

**Deep Tide Intro:** Everyone is talking about compute and models, but this article raises a more fundamental question: can energy supply keep up? Morgan Stanley projects that the U.S. will face a 45GW electricity shortfall in 2028, delivery lead times for large transformers have reached 24 to 36 months, and the electricity consumption of AI data centers is growing at 15% per year. From this, the author derives 7 investment theses—from grid fragmentation to solid-state transformers to two-phase cooling. The perspectives are somewhat niche, but they are critical.

The full text is as follows:

NVIDIA recently released a framework called “AI is a five-layer cake.” Today I want to argue that the power/energy layer is the binding constraint on the growth of intelligence, and discuss its consequences.

Advances in human civilization are the result of our ability to harness tools, whether hammers, fire, horses, printing presses, telephones, lightbulbs, steam engines, radios, or AI. These “tools” are how humanity turns energy into productivity.

At its core, we improve human productivity by capturing energy and directing it toward our goals with tools.

In short, the core logic of human civilization's progress is: energy, harnessed and directed through tools, becomes productivity.

For most of human history, people relied on their own muscle power, with hands and simple implements as the tools that advanced their goals, whether farming or writing. The printing press is a classic example of how energy and tools co-advance. Popularized by Gutenberg around 1440, it replaced an extremely inefficient process in which people spent their own energy copying information by hand with pens (the tools). The press introduced a new tool that used mechanical impressions to greatly improve how efficiently human energy was used, raising productivity by several orders of magnitude. Yet from 1450 to 1800, for nearly 350 years, printing presses saw almost no substantive innovation. Only when humans harnessed a more powerful form of energy, coal, did the energy side of the equation change. In 1814, Friedrich Koenig introduced a steam-powered printing press, adapting printing to the dominant energy innovation of the time and boosting efficiency by 5x. From there, presses kept adapting to new energy sources: output rose from 250 copies per hour to 30,000 copies within 50 years, and today reaches several million.

So the process continues to this day: inventing new tools, pushing the boundaries of how much energy we can harness, and improving the efficiency of those tools relative to the energy available. Today, “intelligence” is the new form of productivity we are focused on, and energy is its fuel. Whether we can keep pushing the growth of intelligence depends on how much sustainable, reliable energy we can produce to power our tools (GPUs) and direct them toward our goal (intelligence).

This theme is consistent with the Kardashev scale, which measures a civilization's technological progress by how much energy it can harness: from a planet, to a star, to a galaxy, to the universe, and even to a multiverse. How much energy we can harness indicates how far we have progressed as a civilization. Historically, this rule has always held, and the future will be no exception. The ability to harness energy is the fundamental driver of civilization.

The core argument of this article is: energy demand is rapidly outpacing supply, and this is the primary bottleneck to advancing intelligence. I will explore the first- and second-order impacts of this thesis.

Why has energy supply growth slowed?

Nuclear fission was discovered in 1939; so far, it is the last major transformation in how humanity harnesses energy. However, due to the Chernobyl disaster and global commitments to shift from nuclear power to renewables, a clear mismatch has emerged between tool innovation and energy progress since 1950. Global energy production was 2600GW in 1950; today it is 19,000GW (a 7.3x increase). This may look like a leap, but such gradual, roughly linear growth has long failed to match the growth of modern computing and technology, barely even outpacing the 3.5x population growth over the same period.
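As a rough sense of scale, here is a back-of-the-envelope check of the growth rate implied by those figures (a sketch assuming roughly 75 years between 1950 and today; the 15% figure is the data center demand growth cited elsewhere in this article):

```python
# Back-of-the-envelope check of the supply growth rate implied by the figures above.
# Assumes ~75 years elapsed between 1950 and today.
supply_1950_gw = 2_600
supply_today_gw = 19_000
years = 75

supply_cagr = (supply_today_gw / supply_1950_gw) ** (1 / years) - 1
print(f"Implied energy supply growth: ~{supply_cagr:.1%} per year")  # roughly 2.7%/yr

# Compare with the ~15%/year growth in AI data center electricity demand cited in this article.
print("AI data center demand growth cited: ~15% per year")
```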

By contrast, the intervals between breakthroughs in tool innovation are getting shorter. It took 364 years from the first printing press to its next major improvement; 58 years from the first flight to space travel; 20 years from the first microprocessor to the internet; and today, major GPU jumps happen every 2 years. We’re living in a window where tool efficiency improvements are accelerating—so multiple innovations stack on top of each other in increasingly shorter cycles. From AI to cryptography to quantum computing, the pace at which new breakthroughs are discovered is speeding up, and their efficiency gains are becoming more and more dramatic. This is the law of accelerating returns.

Today, data centers account for 1.5% of global electricity consumption and are expected to reach 3% by 2030—covering in 6 years the kind of distance it took steam engines 50 years to cover. The key difference between the Industrial Revolution and the current intelligence explosion is this: during the Industrial Revolution, as demand grew, energy supply was built simultaneously—coal mines, canals, rail networks, and the machines that consumed them expanded in lockstep. Every previous energy revolution built its own supply chain while scaling up. AI inherits an existing supply chain, and that supply chain has already started to crack.

The power grid is fundamentally unprepared to handle a 15% year-over-year increase in electricity consumption from this intelligence explosion: U.S. electricity demand has seen almost zero growth over the past decade. Cracks are already showing in the U.S.: the grid interconnection queue is the longest it has ever been, delivery lead times for large transformers average 24 to 36 months, and in 2025 power transformers face a 30% supply shortfall. Morgan Stanley estimates that the U.S. alone will face a 45GW electricity shortfall by 2028, equivalent to the electricity demand of 33 million U.S. households. I think the gap may be far larger than that.
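The household equivalence is easy to sanity check (a rough sketch; the typical US household consumption figure below is my assumption, not from the article):

```python
# Rough check of "45 GW shortfall ≈ 33 million US households".
shortfall_gw = 45
households = 33_000_000

avg_kw_per_household = shortfall_gw * 1e6 / households        # 1 GW = 1e6 kW
annual_kwh_per_household = avg_kw_per_household * 8760        # hours in a year

print(f"~{avg_kw_per_household:.2f} kW average load per household")    # ~1.36 kW
print(f"~{annual_kwh_per_household:,.0f} kWh per household per year")  # ~11,900 kWh
# A typical US household uses on the order of 10,000-11,000 kWh/year (an assumption,
# not from the article), so the comparison is in the right ballpark.
```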

The problem is clear: humanity needs to aggressively expand energy production to keep pace with innovation leaps in AI, robotics, autonomous driving, and more.

The coming energy shortfall: first- and second-order impacts

The consequences of the coming energy shortfall are historically significant: as energy demand surges and supply falls short, we may see the emergence of quasi-private energy markets.

Hyperscalers have already started building behind-the-meter (BTM) generation facilities and plan to expand into nuclear-powered data centers; this trend is already taking shape, and I believe it will only become more pronounced.

Below are 7 theses, all derived from the intelligence explosion and its effect of keeping power supply persistently tight.

Thesis One: Grid fragmentation—compute moves toward energy, not the other way around

Energy-rich, lightly regulated jurisdictions close to demand centers will capture disproportionate value as energy systems fragment.

When energy demand begins to exceed supply, electricity becomes politically sensitive. Households vote; data centers don’t. Under energy shortfalls, the grid is unlikely to remain neutral; instead, it will place residential electricity demand above industrial demand through pricing, connection limits, or soft caps.

Given that compute is extremely sensitive to latency, uptime, and reliability, it’s fundamentally not viable to run workloads in jurisdictions that prioritize residential power. As grid access becomes unstable or politicized, compute workloads will migrate to behind-the-meter (BTM) generation modes—where power can be directly guaranteed, controlled, and priced.

This will drive a structural shift: compute migrates to energy-abundant, lightly regulated economies. The winners will be entities that can integrate land, interconnectivity, power generation, and fiber into deployable, replicable systems—and the jurisdictions where those systems sit will benefit as well.

Thesis Two: Energy becomes a competitive moat, and BTM self-generation becomes a core capability for distinguishing compute providers

In my view, this is the most critical first-order impact as energy shortfalls worsen. In a world where energy demand exceeds supply, access to cheap, reliable electricity is a structural cost advantage that compounds over time. Moreover, giving data centers priority access to grid electricity is politically unsustainable, and energy policy is already heading in the opposite direction. Tighter national grid supply will force compute providers to build their own power; hyperscalers have already begun moving this way. Any infrastructure without BTM generation will simply be eliminated.

Essentially, companies that own power win, and companies that lease power lose. Without BTM generation, compute providers face power reliability issues (fatal), rising costs, and capacity limits. Without self-generation infrastructure, pure-play colocation REITs (such as Equinix and Digital Realty) lose value relative to vertically integrated operators. Companies that combine energy generation with compute hosting (Crusoe, Iren, and some hyperscalers) are building the deepest moats. This can be framed as a long/short trade, but here I would rather emphasize the winners of vertical integration.

Thesis Three: Standardization of BTM drives innovation—from conventional transformers to solid-state transformers, from traditional switchgear to digital switchgear

Conventional transformers step grid alternating current up or down in voltage. Because of their size and materials, delivery lead times run 24 to 36 months, and they face a 30% supply gap. They are also a technology from the 1880s, built largely by hand from supply-constrained materials. The key point is that every megawatt of BTM generation must be converted, regulated, and delivered to the compute load; there is no way to bypass this step.

Solid-state transformers replace all of that with high-frequency power electronics. They are smaller, faster, and fully controllable, handling AC-to-DC conversion, voltage regulation, and bidirectional power flow within a single unit. Manufacturing is also simpler, relying on wide-bandgap power semiconductors such as silicon carbide and gallium nitride rather than massive copper windings and oil-filled tanks. As BTM becomes the standard architecture, the device that connects energy to compute becomes the bottleneck, and that points to the solid-state transformer (SST).

Switchgear faces similar delays of around 80 weeks. It is the control layer between generation and load, responsible for routing power, isolating faults, and protecting systems. Like transformers, switchgear is labor-intensive, built from supply-constrained materials, and has barely changed since the 1880s.

Digital switchgear replaces all of that with solid-state power electronics. It is faster, programmable, and fully controllable, enabling real-time fault detection, remote isolation, and dynamic load routing. Just as importantly, it scales like consumer electronics, not like industrial equipment.

A side note on copper: I am constructive on copper. Copper is the highway that electrons travel on, and it will be the top traded commodity in an increasingly electrified world. However, the way this trade expresses itself is subtle: traditional mining companies offer direct commodity exposure, with low margins and the risk of compression over time. The real bottlenecks, and the room for value to accumulate, sit at the finished-product end, where copper is non-substitutable and delivery times are constrained. Cable manufacturers like Prysmian and Nexans sell supply-constrained finished products, not raw material exposure, and with transformer delivery lead times stretching dramatically, this is no longer a pure commodity market.

Thesis Four: The carbon cost of AI is becoming increasingly hard to sustain politically, forcing solutions dominated by solar and batteries

AI is being built on top of a carbon cost that has not yet been priced in, and that is a political constraint. Data centers raise electricity prices, consume large amounts of water, and increase local emissions. This is already showing up: an $18 billion data center project was canceled outright, and a $46 billion project was delayed.

Today, about 56% of data center electricity comes from fossil fuels. Natural gas solves deployment speed problems but is politically fragile. As demand expands, resistance to expanding fossil energy increases, forcing near-term systems that combine natural gas, nuclear, and renewables.

Although natural gas serves as a short-term bridge in the explosive growth of data centers, over a longer time horizon, energy abundance is not solved by fuel extraction—it’s solved by energy capture. The energy the sun delivers to Earth is orders of magnitude higher than what humanity consumes. The constraint isn’t availability—it’s conversion, storage, and deployment.

Solar power is not the immediate solution to compute-related electricity demand—it’s the ultimate solution.

Current commercial solar captures about 22% of incoming energy. Every improvement in conversion efficiency lowers the cost per megawatt, pushing solar closer to levelized cost parity with dispatchable generation inside BTM systems.
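To make the cost mechanics concrete, here is a simplified levelized-cost sketch; every input is a hypothetical placeholder rather than a figure from the article, and financing and discounting are ignored, so the absolute numbers understate real-world LCOE:

```python
# Simplified LCOE sketch with hypothetical inputs (no discounting or financing costs).
def lcoe_usd_per_mwh(capex_usd_per_kw, annual_opex_frac, capacity_factor, lifetime_years):
    """Crude levelized cost: lifetime cost per kW divided by lifetime energy per kW."""
    lifetime_cost = capex_usd_per_kw * (1 + annual_opex_frac * lifetime_years)  # $ per kW installed
    lifetime_mwh = capacity_factor * 8760 * lifetime_years / 1000               # MWh per kW installed
    return lifetime_cost / lifetime_mwh

# If higher cell efficiency trims the area-driven share of capex, $/kW falls and LCOE falls with it.
baseline = lcoe_usd_per_mwh(900, annual_opex_frac=0.01, capacity_factor=0.25, lifetime_years=30)
improved = lcoe_usd_per_mwh(800, annual_opex_frac=0.01, capacity_factor=0.25, lifetime_years=30)
print(f"Baseline: ~${baseline:.0f}/MWh  |  Higher efficiency: ~${improved:.0f}/MWh")
```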

Battery energy storage becomes the core component of this architecture—not just to smooth intermittency, but also as a revenue layer. Energy storage arbitrage and load balancing turn what used to be cost centers into profit generators for BTM operators.
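A toy example of the arbitrage point (all prices and battery parameters below are hypothetical, purely to illustrate the revenue mechanism):

```python
# Toy daily energy-arbitrage calculation with hypothetical prices and battery specs.
battery_mwh = 10              # usable discharge capacity
round_trip_efficiency = 0.88  # charge -> discharge losses
off_peak_price = 30           # $/MWh paid to charge
peak_price = 120              # $/MWh earned when discharging

energy_bought_mwh = battery_mwh / round_trip_efficiency  # extra energy needed to cover losses
daily_margin = battery_mwh * peak_price - energy_bought_mwh * off_peak_price
print(f"Daily arbitrage margin: ~${daily_margin:,.0f}")   # ~$859 on these assumptions
```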

In this thesis, the winners are vertically integrated companies that span capture, storage, and delivery: specialized solar developers with BTM contracts, battery manufacturers with grid-level and site-level products, and a small set of operators that can combine their own generation with compute hosting.

Solar is a game of procurement and manufacturing; batteries are the constraint and the monetization layer. Frontier capture technology is still an options-like bet rather than a base-case scenario. In this regard, Tesla may continue to be the biggest winner, but I will limit the analysis to non-consensus tickers.

Thesis Five: Cooling becomes a first-order constraint, and two-phase direct-to-chip (D2C) liquid cooling will become a must in frontier applications

Another consequence is the rise of two-phase direct-to-chip liquid cooling. To be frank, this thesis rests partly on my own judgment: chip power density is rising along a parabolic trajectory, which creates an increasingly difficult thermodynamic problem. Traditional air cooling is fundamentally unsustainable for a variety of reasons, chief among them that it simply cannot keep up with higher-density chips, along with environmental concerns about water and electricity consumption.

First, D2C cooling increases density and performance without being constrained by heat dissipation, which is the key issue in scaling. The reality of the current market is that single-phase cooling dominates because it is simpler: chilled water circulates through cold plates to cool the chips, but it has a known upper limit. When chip power exceeds roughly 1,500W, a shift to two-phase cooling becomes unavoidable. Two-phase cooling pumps a dielectric liquid, designed to boil at low temperatures, to the chip, dramatically improving heat removal through the phase change from liquid to vapor.
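To see why the phase change matters, compare the heat absorbed per kilogram of coolant; the property values below are illustrative assumptions (the dielectric figure is roughly that of a Novec-class fluid), not data from the article:

```python
# Why boiling helps: heat absorbed per kg of coolant, single-phase water vs a boiling dielectric.
water_cp_kj_per_kg_k = 4.18        # specific heat of water
allowed_temp_rise_k = 10           # assumed allowable temperature rise across a cold plate
dielectric_latent_kj_per_kg = 142  # approx. latent heat of a Novec-class dielectric fluid

sensible_kj_per_kg = water_cp_kj_per_kg_k * allowed_temp_rise_k  # ~42 kJ/kg
latent_kj_per_kg = dielectric_latent_kj_per_kg                   # ~142 kJ/kg

print(f"Single-phase water (10 K rise): ~{sensible_kj_per_kg:.0f} kJ per kg of coolant")
print(f"Boiling dielectric fluid:       ~{latent_kj_per_kg:.0f} kJ per kg of coolant")
# The phase change absorbs several times more heat per kilogram of coolant moved,
# which is what lets two-phase loops keep up with much higher chip power densities.
```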

Two-phase cooling can reduce energy use by 20% and cut water consumption by 48%. These gains enable denser chip packaging and higher performance, which in turn creates even more demand for high-performance cooling.

Zutacore, a leading two-phase D2C company, has demonstrated two-phase D2C cooling using dielectric liquids (not water), reducing energy consumption by 82% and eliminating water usage entirely, with results verified by Vertiv and Intel research. Zutacore is a private operator worth watching in this space, and digging further into dielectric liquid suppliers may also be valuable.

Thesis Six: Nuclear power can serve as a bridge toward energy abundance and stable supply, but it isn’t the long-term answer for energy expansion

When I started writing this article, I thought nuclear power was a good way to fill the short-term gap in energy supply. The reality is that small modular reactors (SMRs) cost roughly 5 to 10 times as much to deploy as natural gas systems (10,000 to 15,000 USD per kW), and in practice they cannot be deployed and scaled at mass scale.
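For context, the natural gas capex implied by those multiples works out as follows (a derivation from the article's own figures; the gas number itself is not stated in the article):

```python
# Natural gas capex implied by the SMR figures above (derived, not a sourced number).
smr_low, smr_high = 10_000, 15_000   # $/kW cited for SMRs
mult_low, mult_high = 5, 10          # "5 to 10x natural gas"

gas_low = smr_high / mult_high       # $1,500/kW
gas_high = smr_low / mult_low        # $2,000/kW
print(f"Implied natural gas capex: ~${gas_low:,.0f} to ${gas_high:,.0f} per kW")
```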

Nuclear power solves the reliability problem—not speed or cost—especially in BTM installations. This allows stable, dispatchable baseload power to be provided in scenarios where reliability is non-negotiable. So nuclear has a role in energy shortfalls—as a bridge, not the core supply.

Nuclear is constrained by the fuel cycle and construction timelines. Today’s advanced reactors require high-assay low-enriched uranium (HALEU), and this fuel supply barely exists commercially at scale. Even if reactors are built, whether fuel can be supplied to them becomes a critical constraint on the speed of nuclear expansion.

Therefore, nuclear power is unlikely to become the marginal solution for energy expansion: it is slow to come online, capital intensive, and constrained by infrastructure and fuel. By comparison, the fastest-expanding systems (natural gas in the near term; solar and storage in the long term) are the options that can close the gap.

The investable bottleneck is not the reactors; it is the fuel. As SMR demand expands, HALEU enrichment capacity will become a key bottleneck, one that is independent of any specific reactor design. No matter which design ultimately wins, value will accumulate here.

Thesis Seven: A new group of energy infrastructure companies is emerging; vertical integrators will turn electrons into computing power

The bottleneck in AI infrastructure isn’t only energy—it’s also the ability to convert energy into usable computing power at large scale.

In the 1870s, oil, like electricity today, was not scarce; the bottleneck was refining and distribution. Rockefeller built one of the biggest companies ever (Standard Oil) by vertically integrating crude oil extraction, refining, and distribution to households.

The intelligence revolution follows the same pattern: electricity is the crude oil. Power may be abundant, but converting it reliably into compute is constrained by power transmission, cooling, interconnects, and permitting. Refining electrons is where the value lies. Every additional layer of ownership improves reliability, lowers costs, and captures profit opportunities, making vertical integration self-reinforcing.

The hyperscalers are the distribution layer of this system and also the endpoint that consumes the compute. The structural opportunity, however, lies in owning the infrastructure that the distributors are forced to buy. This creates a new category of energy infrastructure company: operators that bundle generation, conversion, cooling, and hosting into one.

The clearest expression is vertically integrated operators in private markets, such as Crusoe and Lancium, and compute-native platforms in public markets, such as Iren and Core Scientific, which already own the hardest-to-replicate underlying asset: energy.

Companies that control the flow of electrons into racks are building the deepest moats in the AI economy. Software can’t swallow physical infrastructure.
