To meet the twin demands of exponential cloud growth and an unprecedented AI demand curve, every global hyperscale data center firm is pressing ahead with new data center projects and the acquisition of suitable colocation capacity.

The exponential growth of training and inference data for AI and machine learning demands an unprecedented amount of computational power. IT infrastructure providers are warning that once models move from training into real-world problem solving, we will rapidly enter the ‘Exascale’ era.

AI computational power is being measured in exaflops (a ‘flop’ being a floating-point operation, and ‘exa’ denoting one quintillion, so one exaflop is one quintillion operations per second) and data storage in exabytes (one billion gigabytes).

Exaflops and exabytes may be unfamiliar terms to data center engineers more used to discussing kWs, MWs and, for AI, GWs. These are the new computational scales that will require leaps in power provision for servers and cooling.
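For readers more at home with kW than quintillions, the scales above can be sketched as simple unit conversions (a minimal illustration, assuming standard SI prefixes; the function names are our own):

```python
EXA = 10**18   # 'exa' = one quintillion
GIGA = 10**9   # 'giga' = one billion

def exaflops_to_flops(ef: float) -> float:
    """Floating-point operations per second in `ef` exaflops."""
    return ef * EXA

def exabytes_to_gigabytes(eb: float) -> float:
    """Gigabytes in `eb` exabytes: 1 EB = one billion GB."""
    return eb * EXA / GIGA

print(exaflops_to_flops(1))       # 1e+18 operations per second
print(exabytes_to_gigabytes(1))   # 1e+09 GB, i.e. one billion gigabytes
```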

In computational power terms it can be considered an arms race. Expect more and bigger announcements.

But such a race raises pressing questions for operators: Could escalating power demand become a critical obstacle to AI digital infrastructure development?

In short: Where will the power for these workloads be sourced?

Companies are looking to grid-independent power solutions. That means technologies such as the Power of 10, which combines 10MW gas engine units with specially designed alternators, large-scale power conditioning and stabilisation technology from Piller, and proven static and rotary UPS technologies that can reside close to the load, inside the data center campus.

AI data centers and microgrids will exist together in partnership. Piller, as part of the Langley Holdings Power Solutions division, has the technology components for power generation, storage and delivery that match the scale requirements across hyperscale AI data centers and commercial colocation AI developments.

In parallel, this fundamental shift, which sees data center campuses of 500MW becoming the new normality, is also accelerating demand for static UPS back-up and sustainable energy storage in the traditional data center space.

Who’s got the power?

In the hyperscale market, while remaining discreet about it in public, tech titans such as Meta, Microsoft, Google, and AWS (Amazon Web Services) are exploring grid-independent power generation options as a strategic move to sustain their AI ambitions.

To address this, Hyperscalers are investigating smart grid technologies, microgrids, and advanced power management systems to regulate and optimize energy distribution within their facilities.

Stabilizing power fluctuations within data centers is a paramount concern. AI workloads may lead to erratic power usage patterns, demanding robust stabilization mechanisms.

Whatever the power sources, companies will require advanced UPS, conditioning and stabilisation technologies capable of seamlessly transitioning between them and ensuring uninterrupted operations.

What part will commercial colo and third-party power play?

Securing power also raises questions for colo and cloud providers.

For the long term, can big colo companies find sufficient power to host AI purely by sourcing renewable energy?

Can Hyperscalers build or buy many hundreds of megawatts of green power through PPAs (Power Purchase Agreements) and RECs (Renewable Energy Certificates) to run their vast AI platforms?

Is it not more likely that a hybrid mix of grid access where available, with clean on-site power generation, stabilisation and conditioning working alongside and interacting with renewable energy resources, is the more achievable and feasible solution?

Sustainable AI

By investing in renewable energy sources for their data centers, Hyperscalers aim to minimize their environmental impact. Examples include Google’s participation in the Renewable Energy Buyers Alliance, Microsoft’s carbon-negative pledge, and AWS’s investments in renewable energy projects.

This goal of energy independence will be achieved by combining renewables connected to microgrids with the support of on-site power generation sited in proximity to renewable energy resources.

All are part of a new understanding of power generation, conditioning, stabilisation, energy storage and back-up.

New Power Strategies

The future of AI relies heavily on a seamless, efficient, and sustainable power supply to colossal data centers.

Grid independence in hyperscale AI data centers involves integrating renewable energy, implementing energy storage solutions, employing microgrids with on-site power and conditioning, and leveraging AI itself to optimize energy consumption and ensure uninterrupted operations.

The shift towards energy independence signifies a transformative phase for hyperscale companies as they reshape the AI infrastructure data center landscape.

With talk of 100MW-300MW total capacity and GPU rack densities of 300kW, the sheer scale of demand makes it hard to envision a future where AI data centers rely solely on traditional grids or new renewable energy sources with standard back-up power chains of high-speed gensets, static UPSs and battery energy storage.
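A back-of-envelope sketch makes that scale concrete. Using the 100MW campus and 300kW rack figures quoted above, and an assumed (illustrative, not a standard) share of total capacity left for IT load after cooling and distribution overhead:

```python
def racks_supported(campus_mw: float, rack_kw: float,
                    it_fraction: float = 0.7) -> int:
    """Rough count of GPU racks a campus could power.

    it_fraction is an assumed share of total capacity available
    to IT load after cooling and distribution losses; the 0.7
    default is purely illustrative.
    """
    it_kw = campus_mw * 1000 * it_fraction   # MW -> kW for IT load
    return int(it_kw // rack_kw)

# A 100 MW campus with 300 kW racks, ~70% of power reaching IT load:
print(racks_supported(100, 300))   # 233 racks
```

Even under generous assumptions, a single campus at the low end of the quoted range amounts to only a few hundred of these ultra-dense racks, which underlines how quickly the quoted GW-scale ambitions consume capacity.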

Grid independence through hybrid renewable microgrids for localized power generation and distribution will provide the sustainability and energy security required.

The increasing demands of AI technologies have catalysed a major shift in the strategies of hyperscale companies and their colocation partners.

As tech giants steer towards sustainable and reliable power solutions, the wave of AI demand hitting energy infrastructure marks the beginning of a new era of technological innovation, one that will ensure the growing energy needs of AI strategies can be met.

For the new era of AI data center building – whether driven by hyperscalers or other developer-investors – the Power of 10 technologies were developed to provide the energy for sustainable AI delivery over the long term.

Connect with Piller to find out more about the Power of 10 and Piller’s latest M Series Static UPS for data centers.
