In an interview on the popular BG2 podcast, Microsoft CEO Satya Nadella acknowledged that the greatest obstacle to deploying AI today isn't a shortage of GPUs, but access to electricity and physical data center infrastructure.
“If you can’t plug those chips in, you can have a warehouse full of them and still not be able to turn them on,” said Nadella.
According to Nadella, the challenges around AI infrastructure have shifted from a “chip shortage” to a “power shortage.” Microsoft reportedly has sufficient GPU supplies from Nvidia, which has significantly increased its production capacity, but installation is being delayed by energy constraints and the limited availability of so-called “warm shells” — data center halls that are fully powered, cooled, and ready for hardware deployment.
Speaking alongside OpenAI CEO Sam Altman, Nadella emphasized that the future of AI will depend more on investments in power generation and efficient chip cooling than on the processors themselves.
The issue is not unique to Microsoft. Industry experts warn that the surge in demand for computing power driven by generative AI has led to energy shortages in key data center regions such as Virginia, Oregon, and Texas. Building new facilities requires environmental permits, advanced cooling systems, and access to sustainable energy — all of which significantly slow down expansion.
According to an IEA report, a single AI data center can consume as much electricity as a medium-sized city of 100,000 people. This underscores that the AI industry is now running into real-world physical limits, not just computational ones.
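To put that comparison in rough perspective, here is a back-of-envelope sketch. The per-capita consumption figure and the data center's power draw below are illustrative assumptions chosen for the calculation, not figures from the IEA report:

```python
# Back-of-envelope comparison: a large AI data center vs. a city of
# 100,000 people. All input figures are illustrative assumptions.

CITY_POPULATION = 100_000
KWH_PER_PERSON_PER_YEAR = 4_500   # assumed average residential use per person
DATACENTER_POWER_MW = 100         # assumed continuous draw of a large AI campus
HOURS_PER_YEAR = 8_760

# Annual consumption in gigawatt-hours
city_gwh = CITY_POPULATION * KWH_PER_PERSON_PER_YEAR / 1e6
datacenter_gwh = DATACENTER_POWER_MW * HOURS_PER_YEAR / 1e3

print(f"City of {CITY_POPULATION:,} people: ~{city_gwh:.0f} GWh/year")
print(f"{DATACENTER_POWER_MW} MW data center:   ~{datacenter_gwh:.0f} GWh/year")
```

Under these assumptions the city uses roughly 450 GWh a year while a 100 MW facility running continuously uses around 876 GWh, which shows how a single large AI campus can indeed land in the same range as, or above, a city of that size.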
Companies like Amazon, Google, and Meta are investing heavily in renewable energy and private power networks, but in the coming years, the growth of AI will depend increasingly on energy infrastructure — not merely on advances in chip design.

