NVIDIA and Emerald AI Collaborate with Energy Leaders to Transform AI Factories into Flexible Grid Assets at CERAWeek 2025

At the annual CERAWeek conference in Houston—an event widely recognized as the Davos of the energy industry—a transformative partnership was unveiled that seeks to redefine the relationship between artificial intelligence and the global power grid. NVIDIA and Emerald AI announced a strategic collaboration designed to transform AI factories from static, high-consumption power loads into flexible, intelligent grid assets. This initiative arrives at a critical juncture as the rapid expansion of generative AI puts unprecedented pressure on electrical infrastructure, forcing a reconciliation between the digital revolution and energy sustainability.
The collaboration integrates NVIDIA’s accelerated computing prowess and Vera Rubin DSX AI Factory reference architectures with Emerald AI’s Conductor platform, a sophisticated energy orchestration system. By unifying compute, power networking, and real-time control into a single, cohesive architecture, the partners aim to help large-scale AI deployments connect to the grid faster, operate with higher efficiency, and actively contribute to system reliability. Rather than simply drawing power, these modernized AI factories are designed to respond dynamically to grid conditions, "flexing" their consumption during peak demand periods to fortify the broader energy ecosystem.
The Evolution of the AI Factory as a Grid Participant
Historically, data centers have been viewed by utilities as "base load" consumers—entities that require a constant, unwavering supply of electricity. However, the sheer scale of the new "Intelligence Era" requires a more nuanced approach. NVIDIA founder and CEO Jensen Huang has frequently described the modern computing paradigm as a "five-layer AI cake," where energy serves as the foundational layer upon which chips, infrastructure, models, and applications are built.
The newly unveiled architecture, built on the NVIDIA Vera Rubin DSX design and the Emerald AI Conductor platform, allows AI factories to generate high-value AI tokens while simultaneously functioning as a demand-response tool. When the grid faces stress—such as during extreme weather events or periods of low renewable generation—the AI factory can intelligently throttle non-critical workloads or shift energy consumption patterns. This flexibility reduces the need for utilities to overbuild expensive peaking infrastructure, which is often carbon-intensive and underutilized.
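The throttling behavior described above can be sketched as a simple priority-based power cap. To be clear, this is an illustrative model only, not Emerald AI's actual Conductor API: the grid-stress signal, the cap thresholds, and every function and job name below are hypothetical assumptions.

```python
# Hypothetical sketch of grid-aware workload throttling. The grid_stress
# signal (0.0 = relaxed, 1.0 = emergency), the cap thresholds, and the job
# mix are all illustrative assumptions, not any vendor's real API.

def choose_power_cap(grid_stress: float, rated_power_mw: float) -> float:
    """Map a grid stress signal to a facility-level power cap in MW."""
    if grid_stress < 0.3:            # normal conditions: run at full power
        return rated_power_mw
    if grid_stress < 0.7:            # elevated stress: shed deferrable load
        return rated_power_mw * 0.75
    return rated_power_mw * 0.5      # emergency: deep curtailment

def schedule(jobs: list, cap_mw: float) -> list:
    """Greedily admit jobs by priority until the power cap is reached."""
    admitted, used = [], 0.0
    for job in sorted(jobs, key=lambda j: j["priority"], reverse=True):
        if used + job["power_mw"] <= cap_mw:
            admitted.append(job["name"])
            used += job["power_mw"]
    return admitted

jobs = [
    {"name": "inference-serving",    "power_mw": 40.0, "priority": 10},
    {"name": "foundation-training",  "power_mw": 50.0, "priority": 5},
    {"name": "batch-eval",           "power_mw": 10.0, "priority": 1},
]

# During an emergency (stress 0.8), a 100 MW facility is capped at 50 MW:
# latency-critical inference stays up, bulk training is shed.
print(schedule(jobs, choose_power_cap(0.8, 100.0)))
# → ['inference-serving', 'batch-eval']
```

The key design point is that curtailment is workload-aware: rather than browning out the whole facility, the orchestrator drops the lowest-value megawatts first.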
To bring this vision to fruition, a consortium of the world’s leading energy producers and utilities has joined the initiative. Companies including AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra are working to align their generation strategies with this new architectural standard. These collaborations include the development of hybrid projects where power generation is co-located with AI factories. Such proximity accelerates the "time to power"—a metric that has become a significant bottleneck for the tech industry—while delivering stabilizing value back to the regional grid.
Efficiency Metrics: Redefining Tokens Per Second Per Watt
As power constraints become the primary limiting factor for AI expansion, the industry is shifting its focus from raw performance to efficiency. The defining metric of this era is "tokens per second per watt." In the context of large language models (LLMs), a token represents a unit of text, and the efficiency with which a system can generate these units relative to its power consumption determines its economic and environmental viability.
During a recent discussion on the Lex Fridman podcast, Jensen Huang emphasized that while power is a significant concern, it is a challenge that can be solved through extreme co-design. Huang noted that NVIDIA is pushing to improve the tokens-per-second-per-watt metric by orders of magnitude annually. This is not a new trajectory for the company; from the introduction of the Kepler GPU architecture in 2012 to the current Vera Rubin platform, NVIDIA has managed to increase the number of tokens generated within the same power budget by more than one million times.
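The scale of that efficiency trajectory is easy to check with back-of-the-envelope arithmetic. The absolute token rates below are invented for illustration; only the million-fold figure and the 2012 Kepler baseline come from the text above.

```python
# Back-of-the-envelope check of the efficiency trajectory described above.
# The absolute throughput and power figures are hypothetical; the 1,000,000x
# gain and the 2012 Kepler baseline are the only numbers from the article.

improvement = 1_000_000          # tokens-per-watt gain, Kepler -> Vera Rubin
years = 2025 - 2012              # roughly 13 years

# Sustaining a 1e6x gain over ~13 years implies this average annual factor,
# consistent with the "orders of magnitude" cadence Huang describes.
annual_factor = improvement ** (1 / years)
print(f"~{annual_factor:.1f}x per year")          # ≈ 2.9x per year

# The metric itself is a simple ratio: tokens per second per watt.
tokens_per_second = 500_000.0    # hypothetical cluster throughput
power_watts = 2_000_000.0        # hypothetical 2 MW facility draw
print(f"{tokens_per_second / power_watts:.2f} tokens/s/W")   # 0.25
```

In other words, the claimed trajectory requires roughly tripling tokens per watt every year, compounded for over a decade.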
This massive leap in efficiency is essential because it allows organizations to maximize revenue and lower operating costs without requiring a linear increase in energy consumption. By prioritizing computational efficiency at the hardware and software levels, the collaboration between NVIDIA and Emerald AI ensures that digital infrastructure remains resilient even as the demand for intelligence scales globally.
Accelerating Infrastructure Through Robotics and Digital Twins
The CERAWeek announcements also highlighted how AI is being used to build the very energy infrastructure it requires. This "virtuous cycle" involves using NVIDIA’s simulation and robotics tools to compress the timelines for construction and power generation.
One of the standout participants, Maximo—a solar robotics company incubated by AES—announced the successful completion of a 100-megawatt robotic solar installation at the Bellefield site. Utilizing AI-driven robotics built on the NVIDIA Isaac Sim framework and NVIDIA Omniverse libraries, Maximo demonstrated that autonomous systems can operate reliably at utility scale. These robots can install solar panels with greater speed, safety, and consistency than traditional methods, helping to close the gap between rising electricity demand and the slow pace of manual construction.
In the realm of nuclear energy, TerraPower, in partnership with SoftServe, previewed an NVIDIA Omniverse-powered digital twin platform. This platform is designed to shorten the siting and design timelines for advanced nuclear plants. By applying high-fidelity simulation to early-stage engineering, TerraPower aims to reduce design cycles from several years to just months. This acceleration is crucial for the deployment of Natrium energy plants, which are expected to provide the carbon-free, reliable baseload power required to sustain the next generation of AI factories.
Solving the Power-to-Rack Challenge
The physical integration of massive AI clusters into the grid presents a "power-to-rack" challenge that requires sophisticated engineering. Leaders in industrial infrastructure, including GE Vernova, Schneider Electric, and Vertiv, detailed at the conference how they are using digital twins and validated reference designs to address this challenge.
GE Vernova outlined how its high-fidelity digital twins, aligned with the NVIDIA Omniverse DSX Blueprint, allow utilities to simulate the interaction between grid behavior, substations, and AI factory loads before a single piece of hardware is installed. This system-level modeling is vital for validating interconnection strategies and reducing the risks associated with adding large, variable loads to constrained grid environments.
Schneider Electric and its partner AVEVA have developed new validated reference designs for the Vera Rubin platform. By simulating power, cooling, and control systems within the Omniverse environment, Schneider enables operators to optimize performance-per-watt and validate designs before buildout. This "digital first" approach ensures that AI factories operate more predictably and efficiently at scale.
Vertiv, a specialist in data center cooling and power, highlighted its converged, simulation-ready physical infrastructure. By using repeatable power and cooling building blocks integrated with the Vera Rubin DSX design, Vertiv is reducing the complexity of deployment. This allows for faster, more confident scaling of AI infrastructure, ensuring that the physical hardware can keep pace with the rapid evolution of AI models.
Workforce Development for the Intelligence Era
The transition to an AI-driven energy economy requires more than just chips and wires; it requires a skilled workforce capable of maintaining and operating these complex systems. To address this, Adaptive Construction Solutions announced a national registered apprenticeship initiative in collaboration with NVIDIA.
This program is designed to scale training for critical trades, providing workers with the skills necessary to build and manage AI factories and modernized energy infrastructure. By expanding access to high-demand careers in the energy and tech sectors, the initiative ensures that the labor market can support the rapid buildout of power systems required for the intelligence era.
Broader Implications and Factual Analysis
The significance of the NVIDIA and Emerald AI announcement cannot be overstated. For the past decade, the tech and energy sectors have often operated in silos, with data center developers seeking the cheapest and most abundant power while utilities struggled to forecast and accommodate lumpy, unpredictable demand.
The shift toward "grid-aware" AI factories represents a fundamental change in philosophy. If AI loads can be managed as flexible assets, they become a solution to grid instability rather than a cause of it. For instance, during a summer heatwave, an AI factory could pause the training of a non-urgent foundation model, freeing up megawatts of power for residential air conditioning, and then resume operations during the night when wind power is abundant and demand is low.
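The heatwave scenario above amounts to a simple time-window policy: checkpoint deferrable training when the grid enters its peak window and resume it off-peak. The peak hours and workload classes below are hypothetical assumptions; a real orchestrator would act on live utility signals rather than a fixed clock.

```python
# Hypothetical time-window deferral policy for the heatwave scenario above.
# The 2pm-8pm peak window and workload classes are illustrative assumptions.

PEAK_HOURS = range(14, 20)   # assume 2pm-8pm is the grid's peak window

def action(hour: int, deferrable: bool) -> str:
    """Decide whether a workload runs or pauses at a given hour."""
    if not deferrable:
        return "run"         # latency-critical inference stays up regardless
    return "checkpoint-and-pause" if hour in PEAK_HOURS else "run"

# A non-urgent foundation-model training job pauses through the afternoon
# peak and resumes overnight, when wind output is high and demand is low.
for hour in (13, 15, 19, 23):
    print(hour, action(hour, deferrable=True))
# → 13 run / 15 checkpoint-and-pause / 19 checkpoint-and-pause / 23 run
```

The checkpoint step matters: because training state is saved before pausing, the freed megawatts cost only schedule time, not lost work.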
Furthermore, the integration of digital twins (Omniverse) into the planning process for both data centers and power plants addresses the "interconnection queue" problem. In many regions, projects are delayed for years because utilities lack the data to understand how new loads will affect the grid. High-fidelity simulation provides the transparency needed to move these projects forward with confidence.
As the global community strives to meet net-zero targets while simultaneously embracing the benefits of artificial intelligence, the collaboration unveiled at CERAWeek provides a technological and strategic roadmap. By treating energy as the foundation of the "AI cake" and building intelligence into every layer of the grid, NVIDIA, Emerald AI, and their partners are ensuring that the world can power the next generation of innovation without compromising the reliability of the systems that keep the lights on.
