Major contributions advance OCP open systems for AI at APAC Summit!

As AI has become a focus for the OCP Community, discussion on the keynote stage revolved around innovation enabling data centers as AI factories

DQI Bureau

The Open Compute Project Foundation (OCP) held its inaugural APAC Summit in Taipei, Taiwan, with an audience of approximately 1,500.


As AI has become a focus for the OCP Community, discussion on the keynote stage revolved around innovation enabling data centers as AI factories, particularly three significant contributions supporting OCP’s Open Systems for AI strategic initiative.

These contributions—Mt Diablo, Deschutes, and Scale-up Ethernet—mark critical advancements in the scalability, efficiency, and performance of AI infrastructure globally. Mt Diablo is now available as an open-source specification, while Deschutes and Scale-up Ethernet are in progress and expected to be published ahead of the OCP Global Summit.

Building on keynote announcements from the OCP EMEA Summit and the launch of OCP’s AI portal, the new contributions add momentum to the growth of OCP’s AI infrastructure ecosystem. These hyperscaler-led contributions signal a strong willingness to deploy related OCP-recognized products at scale, which helps rally vendors across the supply chain around key open systems available to everyone building AI data centers.


Mt Diablo (Diablo 400): Co-authored by Microsoft, Meta, and Google, the Diablo specification (version 0.5 now available) enables groundbreaking high-density AI rack solutions, designed to scale IT racks from 100 kilowatts up to 1 megawatt.

The ORv3-based disaggregated power rack, or sidecar rack, moves power delivery from 48 volts direct current (VDC) to the new ±400 VDC. Beyond simply increasing power delivery capacity, selecting 400 VDC as the nominal voltage leverages the supply chain established by electric vehicles, standardizing electrical and mechanical interfaces for greater economies of scale, proven quality, and more efficient manufacturing.
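The motivation for the higher bus voltage can be illustrated with simple Ohm's-law arithmetic. The sketch below is illustrative only, and it assumes the ±400 VDC rails deliver power across the full 800 V pole-to-pole potential; real distribution architectures and per-rail loading will differ.

```python
# Back-of-the-envelope comparison: conductor current needed to deliver
# a given rack power at different DC bus voltages (I = P / V).
# Assumption: the ±400 VDC scheme carries power across the full
# 800 V pole-to-pole potential. This is a sketch, not the spec.

def bus_current_amps(power_watts: float, bus_volts: float) -> float:
    """Ideal DC current for a given power draw and bus voltage."""
    return power_watts / bus_volts

for rack_kw in (100, 500, 1000):  # 100 kW up to 1 MW, the range the spec targets
    p = rack_kw * 1_000
    i_48 = bus_current_amps(p, 48)    # legacy ORv3 48 VDC busbar
    i_800 = bus_current_amps(p, 800)  # ±400 VDC, pole to pole
    print(f"{rack_kw:>5} kW: {i_48:>8.0f} A @ 48 V  vs  {i_800:>5.0f} A @ ±400 V")
```

At 1 MW, the 48 V bus would need to carry on the order of twenty thousand amps, while the higher-voltage bus cuts that by more than an order of magnitude, which is what makes megawatt-class racks practical to cable and cool.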

Deschutes: A contribution in progress from Google, Project Deschutes is a forthcoming specification that complements Diablo by advancing liquid-cooling technologies for high-performance AI workloads. This fifth generation of Google’s Coolant Distribution Unit (CDU) promises to enhance thermal management and operational efficiency, addressing the growing demands of AI-driven computing environments. Google’s fourth-generation CDU is already in operation, with the Deschutes design alongside it. Additional details will be shared once the specification is finalized and published to the OCP contribution database.


Scale-up Ethernet: Authored by Broadcom, this in-progress contribution introduces a new framework for high-performance Ethernet networking tailored to AI scale-up workloads. The Scale-up Ethernet (SUE) framework optimizes network performance and connectivity, ensuring seamless data transfer and the low-latency communication critical for AI applications.

“These contributions underscore the OCP community’s commitment to driving open, scalable, and sustainable solutions for AI infrastructure,” said George Tchaparian, CEO of the Open Compute Project Foundation. “By combining the expertise of industry leaders like Microsoft, Meta, Google, and Broadcom, we are paving the way for transformative advancements in AI data center design and performance.”

As OCP continues to bolster its Open Systems for AI strategic initiative with these contributions, they will also feature in standardized cluster building blocks created in the complementary Open Cluster Designs for AI strategic initiative that has recently begun within the OCP Community. The OCP Foundation invites attendees of the APAC Summit to explore these contributions through technical sessions, demonstrations, and collaborative discussions. These innovations reflect OCP’s mission to foster open collaboration and accelerate the deployment of next-generation AI infrastructure.


With hyperscale operators encountering unprecedented challenges in compute density, power distribution, interconnect, and cooling as they build AI clusters composed of racks consuming as much as one megawatt, OCP's collaborative community of more than 400 corporate members and 6,000 active engineers is developing open standards to address bottlenecks that threaten to constrain AI infrastructure growth.
