Infineon to revolutionize power delivery architecture for future AI server racks with Nvidia

The new system architecture significantly increases energy-efficient power distribution across the data center and allows power conversion directly at the AI chip on the server board

DQI Bureau

Infineon Technologies AG is revolutionizing the power delivery architecture required for future AI data centers. In collaboration with Nvidia, Infineon is developing the next generation of power systems based on a new architecture with central power generation of 800 V high-voltage direct current (HVDC).

The new system architecture significantly increases energy-efficient power distribution across the data center and allows power conversion directly at the AI chip (Graphics Processing Unit, GPU) within the server board. Infineon's expertise in power conversion solutions from grid to core, based on all relevant semiconductor materials, silicon (Si), silicon carbide (SiC), and gallium nitride (GaN), is accelerating the roadmap to a full-scale HVDC architecture.

This revolutionary step paves the way for advanced power delivery architectures in accelerated computing data centers and will further enhance reliability and efficiency. As AI data centers already scale beyond 100,000 individual GPUs, the need for more efficient power delivery is becoming increasingly important.

AI data centers will require power outputs of one megawatt (MW) and more per IT rack before the end of the decade. Therefore, the HVDC architecture coupled with high-density multiphase solutions will set a new standard for the industry, driving the development of high-quality components and power distribution systems.
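
To make the scaling concrete, the short sketch below compares distribution current and resistive busbar loss for a 1 MW rack at a conventional low-voltage DC bus versus the 800 V HVDC bus described above. The 54 V bus level and the busbar resistance are illustrative assumptions, not figures published by Infineon or Nvidia.

```python
# Illustrative back-of-the-envelope comparison (assumed values, not vendor data):
# distribution current and I^2*R busbar loss for a 1 MW rack at two bus voltages.
RACK_POWER_W = 1_000_000        # 1 MW per rack, as projected in the article
BUSBAR_RESISTANCE_OHM = 1e-4    # assumed end-to-end busbar resistance (illustrative)

def distribution_loss(bus_voltage_v: float) -> tuple[float, float]:
    """Return (current in A, resistive loss in W) for a given bus voltage."""
    current = RACK_POWER_W / bus_voltage_v       # I = P / V
    loss = current ** 2 * BUSBAR_RESISTANCE_OHM  # P_loss = I^2 * R
    return current, loss

for voltage in (54.0, 800.0):
    current, loss = distribution_loss(voltage)
    print(f"{voltage:6.0f} V bus: {current:8.0f} A, busbar loss ~ {loss / 1000:6.2f} kW")
```

Because current scales with 1/V and resistive loss with 1/V², moving from a low-voltage bus to 800 V cuts the distribution current by more than an order of magnitude and the conduction loss by roughly two orders of magnitude under these assumptions.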

“Infineon is driving innovation in artificial intelligence,” said Adam White, Division President Power & Sensor Systems at Infineon. “The combination of Infineon's application and system know-how in powering AI from grid to core, combined with Nvidia's world-leading expertise in accelerated computing, paves the way for a new standard for power architecture in AI data centers to enable faster, more efficient and scalable AI infrastructure.”

"The new 800V HVDC system architecture delivers high reliability, energy-efficient power distribution across the data center,” said Gabriele Gorla, VP of system engineering at Nvidia. “Through this innovative approach, Nvidia is able to optimize the energy consumption of our advanced AI infrastructure, which supports our commitment to sustainability while also delivering the performance and scalability required for the next generation of AI workloads.”

At present, the power supply in AI data centers is decentralized: the AI chips are fed by a large number of power supply units (PSUs). The future system architecture will be centralized, making the best possible use of the constrained space in a server rack. This will increase the importance of leading-edge power semiconductor solutions that use the fewest possible power conversion stages and allow upgrades to even higher distribution voltages.
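
As a rough illustration of why minimizing conversion stages matters, the sketch below multiplies assumed per-stage efficiencies for a longer decentralized chain and a shorter centralized HVDC chain. The stage counts and efficiency figures are hypothetical, chosen only to show how end-to-end efficiency compounds; they are not published data for either architecture.

```python
# End-to-end efficiency is the product of per-stage efficiencies,
# so removing a conversion stage directly improves the total.
from math import prod

# Assumed decentralized chain: AC distribution, rack PSU,
# intermediate bus converter, point-of-load regulator (illustrative values).
decentralized_stages = [0.96, 0.95, 0.97, 0.93]

# Assumed centralized 800 V HVDC chain: central rectification,
# HVDC-to-intermediate conversion, point-of-load regulation at the GPU.
hvdc_stages = [0.975, 0.97, 0.93]

for name, stages in (("decentralized", decentralized_stages),
                     ("800 V HVDC", hvdc_stages)):
    efficiency = prod(stages)
    print(f"{name:>14}: {len(stages)} stages -> {efficiency:.1%} end-to-end")
```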
