ARTICLE

From Grid Liabilities to Grid Resources: Unlocking Data Center Flexibility

How software tools are improving data center speed-to-power while easing grid strain

Agustina Soriano Sergi
Associate — Ventures
Eileen Waris
Principal — Ventures
November 20, 2025

As data centers take on a historic role within the energy ecosystem, hyperscalers and utilities alike are racing to transform data centers into grid resources rather than grid liabilities. This fall, we gathered with some of the top voices in the space at Energize's Sustainable Innovation Summit: Data Centers to examine how AI’s growing compute demands intersect with grid infrastructure, policy, energy optimization, and more. Here, we share their perspectives, together with Energize's research, on how data center flexibility may be the linchpin of the next chapter of AI deployment.

The AI Gridlock

For the first time, data centers are becoming a kitchen table problem. The race to build enough compute infrastructure to meet the AI boom has created a historic moment in energy demand, requiring a level of investment in the grid that the U.S. hasn't seen since the postwar era. Where grid planning and orchestration once followed a predictable cadence of expansion, today’s energy ecosystem is a mix of shifting priorities, complex dynamics, and opaque legacy technologies. In the past year alone, grid interconnection requests by large power users have risen by 227%, with many of those requests exceeding a gigawatt each; that’s the equivalent of adding a small city to the grid, or half the capacity of the Hoover Dam.

And these hyperscalers want power fast. As firms compete for market share and race to establish their foundational AI models, every month of project delay is costly. The “move fast and break things” tech ethos is colliding with a “zero-fault” utility industry, creating pressure to reimagine energy generation and transmission infrastructure to be faster, more efficient, and smarter than ever before.

To meet this moment, a new suite of solutions will be essential, not only to increase speed-to-power but to do so in a way that creates a more reliable, digitally enabled, decarbonized energy economy. This can happen in a few ways:

  1. Planning and Deployment: By modernizing interconnection systems and improving grid visibility, software solutions can increase data center speed-to-power while avoiding infrastructure overbuild and grid strain.
  2. Energy Usage Flexibility: Managing a data center as a flexible grid asset can reduce the severity of peak events and the need for infrastructure redundancy and buffer margins.
  3. Compute Optimization: Optimized compute processes can stabilize system operations and improve efficiency to reduce overall load.

By pulling these three levers, data centers can shift from grid liabilities to grid assets, delivering compute without straining the physical infrastructure we rely on.

The Role of Software Solutions

Here are a few of the ways we see software solutions stepping in to increase speed-to-power while easing grid strain:

1) Synchronizing power and development

Limited capacity and limited visibility into interconnection queues are among the biggest constraints on efficient power buildout. Across ISOs and regions, interconnection queues for large loads are opaque and hard to predict, making data center siting and planning increasingly difficult. Developers are often forced to make multi-hundred-million-dollar siting and procurement decisions without granular visibility into where capacity is actually available. The lack of transparency in queue data and upgrade timelines can add years to projects and distort regional siting strategies.

Software Solution: New tools clarify where and when power will be available, and what flexibility commitments would unlock earlier or larger energization, letting utilities, hyperscalers, and power producers operate from a shared map. Tools like Energize portfolio company Nira Energy bring transparency to queues and timelines so teams can align construction plans with real transmission upgrade schedules, avoid bubble dynamics in congested zones, and accelerate speed-to-power.
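For illustration, here is a minimal sketch of how queue and headroom data might feed a siting decision. The schema, numbers, and scoring heuristic are hypothetical, not Nira Energy's product; real estimates are derived from ISO queue filings and interconnection studies:

```python
# A minimal sketch of queue-aware site screening. Fields and numbers
# are hypothetical; real tools derive them from ISO queue data.
from dataclasses import dataclass

@dataclass
class SiteOption:
    name: str
    available_mw: float        # estimated headroom available today
    upgrade_wait_months: int   # modeled transmission-upgrade timeline
    queue_depth_gw: float      # competing requests ahead in the queue

def months_to_power(site: SiteOption, needed_mw: float) -> float:
    """Rough speed-to-power estimate: load that fits today's headroom is
    fast; anything beyond it waits on upgrades, scaled by congestion."""
    if needed_mw <= site.available_mw:
        return 6.0  # assume ~6 months for studies and energization
    congestion_penalty = 1.0 + 0.5 * site.queue_depth_gw
    return site.upgrade_wait_months * congestion_penalty

sites = [
    SiteOption("Site A", available_mw=300, upgrade_wait_months=36, queue_depth_gw=4.0),
    SiteOption("Site B", available_mw=900, upgrade_wait_months=48, queue_depth_gw=1.5),
]
# A 500 MW campus fits Site B's headroom today but waits years at Site A.
for s in sorted(sites, key=lambda s: months_to_power(s, 500)):
    print(f"{s.name}: ~{months_to_power(s, 500):.0f} months to power")
```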

2) Orchestrating energy usage to reduce grid stress

Though planners often view data centers as flat load assets, they are, in reality, highly variable energy consumers. As a result, when a new data center is deployed, power providers build enough infrastructure to accommodate peaks that occur only 1-2% of the time, leaving the system underutilized for the remaining hours. By managing overall energy load and curtailing operations during peak events, data centers can reduce their impact on transmission infrastructure, requiring less grid build-out and shortening time-to-power. In a recent paper, Google Head of Market Innovation Tyler Norris estimated that curtailing just 0.25% of annual data center energy use during peak hours could free up enough headroom on the existing grid to serve 76 GW of new load, the equivalent of powering 50 million homes.
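The arithmetic behind that claim is worth making concrete. Below is a quick back-of-envelope sketch (our own, not the paper's methodology; the 1 GW campus is illustrative):

```python
# Back-of-envelope check on the curtailment math (a sketch of the
# arithmetic only, not the cited paper's methodology).
HOURS_PER_YEAR = 8760
curtailment_share = 0.0025  # 0.25% of annual energy use

# If curtailment happens at full load, 0.25% of annual energy is
# roughly this many hours offline (or ramped down) per year:
hours_curtailed = curtailment_share * HOURS_PER_YEAR
print(f"~{hours_curtailed:.0f} hours/year of curtailment")  # ~22 hours

# For a hypothetical 1 GW campus, the energy given up is small next to
# the peak capacity the grid no longer has to reserve for it:
campus_mw = 1_000
print(f"{campus_mw * hours_curtailed:,.0f} MWh forgone for {campus_mw} MW of peak relief")
```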

Software Solution: Orchestration platforms are emerging as a possible digital middleware layer between utilities, cloud providers, and operators, turning data centers into flexible loads that can interconnect faster and respond more quickly to grid signals. By integrating load management directly into compute scheduling and data center operations, these tools aim to modulate workloads in real time, shifting noncritical processing to periods of lower demand or higher renewable availability. By curtailing demand during a few key moments of grid stress, these solutions reduce the infrastructure required to serve each new facility, cutting the need for costly transmission upgrades and enabling faster, cleaner deployment of data centers without compromising uptime. In one example, a collaboration between NVIDIA, Emerald AI, EPRI, Digital Realty, and PJM aims to build the first power-flexible AI Factory; Emerald AI estimates that such a model could unlock 100 GW of capacity on the existing electricity system if adopted nationwide.
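As a simplified illustration of the scheduling logic, here is a sketch of a dispatcher that sheds deferrable work when a grid-stress signal arrives. The stress feed, job fields, and 50% shed factor are illustrative assumptions, not any vendor's implementation:

```python
# A minimal sketch of grid-aware workload orchestration. The stress
# signal and shed factor are hypothetical; real platforms integrate
# with utility demand-response programs and cluster schedulers.
from dataclasses import dataclass, field
from typing import List
import heapq

@dataclass(order=True)
class Job:
    priority: int  # 0 = latency-critical, higher = more deferrable
    name: str = field(compare=False)
    est_power_kw: float = field(compare=False)

def dispatch(jobs: List[Job], grid_stress: float, site_cap_kw: float) -> List[Job]:
    """Run jobs in priority order until the stress-adjusted power budget
    is hit; return the jobs deferred to a lower-demand window.

    grid_stress in [0, 1]: 0 = normal operations, 1 = peak event.
    """
    budget_kw = site_cap_kw * (1.0 - 0.5 * grid_stress)  # shed up to 50% at peak
    heapq.heapify(jobs)
    used_kw, deferred = 0.0, []
    while jobs:
        job = heapq.heappop(jobs)
        if job.priority == 0 or used_kw + job.est_power_kw <= budget_kw:
            used_kw += job.est_power_kw  # critical jobs always run
        else:
            deferred.append(job)         # shifted to a later window
    return deferred

# Example: during a peak event (stress=0.9), batch training defers
# while interactive inference keeps running.
queue = [Job(0, "inference-api", 400.0), Job(5, "batch-training", 900.0)]
print([j.name for j in dispatch(queue, grid_stress=0.9, site_cap_kw=1000.0)])
```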

3) Optimizing compute resources to maximize capacity

Most data centers are designed for uptime, not flexibility. Redundant systems, always-on configurations, and conservative operating margins keep server utilization well below 100% for most hours. This rigidity leaves enormous potential untapped. Compute demand is highly variable, especially in AI inference, where workloads spike and dip in milliseconds. This “lumpiness” makes it difficult to forecast true power needs, leading operators to over-provision equipment and grid capacity that largely goes unused. By optimizing workloads and identifying stranded compute capacity, software can help operators make fuller use of existing power budgets and maximize the efficiency of assets that are already installed, driving higher utilization of existing servers and bigger revenue opportunities for customers.
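To make "stranded capacity" concrete, here is a rough sketch of how headroom might be quantified from power telemetry. The data is synthetic and the numbers are illustrative, not drawn from a real facility:

```python
# A rough sketch of quantifying stranded power headroom from telemetry.
# Data here is synthetic; real tools ingest PDU/BMS feeds at sub-second
# resolution and account for cooling and redundancy constraints.
import random

random.seed(7)
# Simulated spiky AI load: moderate most hours, rare tall peaks.
samples_kw = [random.gauss(5_500, 800) + (3_500 if random.random() < 0.02 else 0)
              for _ in range(8_760)]  # one reading per hour for a year

peak = max(samples_kw)
p99 = sorted(samples_kw)[int(0.99 * len(samples_kw))]
avg = sum(samples_kw) / len(samples_kw)

# If the rare peaks are managed with power caps, provisioning to the
# 99th percentile instead of the absolute worst case frees headroom:
print(f"avg {avg:,.0f} kW | p99 {p99:,.0f} kW | peak {peak:,.0f} kW")
print(f"headroom reclaimed vs. peak provisioning: {peak - p99:,.0f} kW")
```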

Software Solution: New orchestration tools are optimizing not just energy usage, but compute itself. Platforms like Neuralwatt use AI-based, power-aware scheduling to smooth load volatility, dynamically tune GPU performance, and coordinate across HVAC, IT, and battery systems to maximize throughput per watt. By aligning compute intensity with real-time grid and workload conditions, these systems recover stranded power headroom and convert it into usable compute, unlocking as much as 20–30% more productive capacity from existing assets and delivering higher output with lower energy waste.
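As one concrete mechanism, GPU power limits can be adjusted programmatically. The sketch below uses NVIDIA's NVML bindings (the nvidia-ml-py package); the linear mapping from grid stress to power cap is our illustrative assumption, not Neuralwatt's method:

```python
# A minimal sketch of power-aware GPU capping via NVIDIA's NVML
# bindings (pip install nvidia-ml-py). Setting limits requires admin
# privileges; the stress-to-cap mapping here is illustrative only.
import pynvml

def apply_power_caps(grid_stress: float) -> None:
    """Scale each GPU's power limit between its min and max constraint.

    grid_stress in [0, 1]: 0 keeps GPUs at full power; 1 drops them to
    the lowest supported cap, trading throughput for peak relief.
    """
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
            cap_mw = int(max_mw - grid_stress * (max_mw - min_mw))
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, cap_mw)
            print(f"GPU {i}: power cap set to {cap_mw / 1000:.0f} W")
    finally:
        pynvml.nvmlShutdown()

# Example: a moderate grid-stress signal trims every GPU's power cap.
apply_power_caps(grid_stress=0.5)
```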

Conclusion

The growth of AI has pushed data centers to the center of the energy conversation. As digital infrastructure becomes one of the fastest-growing sources of demand on the grid, it can also become one of the most powerful levers for resilience. With the right software stack to synchronize power and development, orchestrate flexible loads, and optimize power-to-compute, data centers can evolve from grid liabilities into active grid assets.