As AI computing accelerates toward higher density and greater energy efficiency, Compal Electronics has unveiled its latest high-performance server platform, the SG720-2A/OG720-2A, at both AMD Advancing AI 2025 in the U.S. and the International Supercomputing Conference (ISC) 2025 in Europe.
The platform is built around AMD Instinct MI355X GPUs and offers both single-phase and two-phase liquid cooling configurations, showcasing Compal's leadership in thermal innovation and system integration. Tailored for next-generation generative AI and large language model (LLM) training, the SG720-2A/OG720-2A delivers exceptional flexibility and scalability for modern data center operations, drawing significant attention across the industry.
With generative AI and LLMs driving increasingly intensive compute demands, enterprises are placing greater emphasis on infrastructure that offers both performance and adaptability. The SG720-2A/OG720-2A answers that need by combining high-density GPU integration with flexible liquid-cooling options, positioning itself as an ideal platform for next-generation AI training and inference workloads.
Key highlights:
- Support for up to eight AMD Instinct MI350 Series GPUs (including MI350X / MI355X): Enables scalable, high-density training for LLMs and generative AI applications.
- Dual cooling architecture – Air & Liquid Cooling: Optimized for high thermal density workloads and diverse deployment scenarios, enhancing thermal efficiency and infrastructure flexibility. The two-phase option, co-developed with ZutaCore®, uses ZutaCore's HyperCool 2-Phase DLC (direct liquid cooling) technology to deliver stable, exceptional thermal performance even in extreme computing environments.
- Advanced architecture and memory configuration: Built on the AMD CDNA 4 architecture with 288 GB of HBM3E memory and 8 TB/s of memory bandwidth, supporting FP6 and FP4 data formats and optimized for AI and HPC applications.
- High-speed interconnect performance: Equipped with PCIe Gen5 and AMD Infinity Fabric for multi-GPU orchestration and high-throughput communication, reducing latency and boosting AI inference efficiency.
- Comprehensive support for mainstream open-source AI stacks: Fully compatible with ROCm, PyTorch, TensorFlow, and more—enabling developers to streamline AI model integration and accelerate time-to-market (see the sketch after this list).
- Rack compatibility and modular design: Supports EIA 19” and ORv3 21” rack standards with modular architecture for simplified upgrades and maintenance in diverse data center environments.
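As an illustration of that drop-in software compatibility, the short sketch below shows how a developer might confirm that the accelerators are visible before launching a training job. It assumes a ROCm build of PyTorch on the host; under ROCm, PyTorch's familiar torch.cuda API maps onto AMD GPUs, so existing CUDA-style code paths typically run unchanged. The device count and names shown are whatever the system actually exposes, not guaranteed values.

```python
import torch

# On a ROCm build of PyTorch, torch.version.hip is set (it is None on CUDA-only builds)
# and the torch.cuda API targets the AMD accelerators in the system.
print("ROCm/HIP version:", torch.version.hip)
print("GPUs visible:", torch.cuda.device_count())

for i in range(torch.cuda.device_count()):
    # On an MI350-series system this would report the Instinct accelerator model.
    print(f"  device {i}: {torch.cuda.get_device_name(i)}")

# Simple sanity check: run a matrix multiply on the first accelerator if one is present.
if torch.cuda.is_available():
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x
    print("matmul OK, result on:", y.device)
```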
Compal has maintained a long-standing, strategic collaboration with AMD across multiple server platform generations. From high-density GPU design and liquid cooling deployment to open ecosystem integration, both companies continue to co-develop solutions that drive greater efficiency and sustainability in data center operations.
“The future of AI and HPC is not just about speed; it’s about intelligent integration and sustainable deployment. Each server we build aims to address real-world technical and operational challenges, not just push hardware specs. The SG720-2A/OG720-2A is a true collaboration with AMD that empowers customers with a stable, high-performance, and scalable compute foundation,” said Alan Chang, VP of Infrastructure Solutions Business Group at Compal.
The series made its debut at AMD Advancing AI 2025 and was concurrently showcased at ISC 2025 in Europe. Through this dual-platform exposure, Compal is further expanding its global visibility and partnership network across the AI and HPC domains, demonstrating a strong commitment to next-generation intelligent computing and international strategic development.