Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

Ranganath Sadasiva, CTO of HPE India, discusses how generative AI is reshaping enterprise IT, the urgent sustainability challenges facing data centers, and why liquid cooling is the future of high-performance computing.

Aanchal Ghatak

As more industries adopt AI, data center infrastructure demands are growing rapidly, putting energy efficiency, heat management, and sustainability at the forefront. In this exclusive interview with Dataquest, Ranganath Sadasiva, Chief Technology Officer, HPE India, discusses why liquid cooling is becoming essential in the AI era, why sustainability needs to go beyond lip service, and how HPE is future-proofing enterprise infrastructure with advanced, environmentally sound design.

How do you view the impact of emerging technologies like generative AI and AI more broadly on today’s enterprises?

The impact is absolutely massive. Artificial intelligence—particularly generative AI—is proving to be a democratizing force. It’s accessible to anyone, anywhere, at any time. What’s more interesting is that AI doesn’t work in isolation. It acts as a unifier of technologies—bringing together infrastructure, software, services, and models.

From an outcomes perspective, AI has a transformative effect across the board—whether it’s for individuals, organizations, or even entire nations. It crosses traditional boundaries. This technology is foundational, and I believe we’re part of the first generation truly witnessing its full potential. A hundred years from now, people might look back and reflect on how we were the ones who laid the groundwork.

AI is pushing the boundaries of what infrastructure can handle—especially in terms of power consumption and heat generation. How is HPE addressing these challenges?

That's a really important question, and to address it, we need to step back and understand why this is happening.

There are three or four key factors at play. First, the sheer volume of data we’re generating is massive. We’ve become data-driven enterprises. According to IDC, we’re looking at approximately 181 zettabytes of data being created globally—that's enormous.

Second, in the semiconductor industry, manufacturers are packing more transistors into increasingly smaller footprints. We’ve moved into nanometer-scale chips, with 2nm becoming a reality. While chips are shrinking physically, their processing power and energy demands are rising.

Third, AI workloads are transforming the architecture of data centers. Earlier, these were primarily CPU-based environments. Now, we’re seeing a shift to hybrid CPU-GPU architectures, depending on the application. Some are GPU-heavy, while others strike a balance between CPUs and GPUs. These setups demand more power, generate more heat, and create challenges around space optimization.

This leads us to a very important evolution—liquid cooling. It's ideal today and will soon become essential. It's one of the most effective solutions for managing heat and energy use in modern data centers.

Are there specific challenges in the Indian data center landscape?

In the Indian context, we’re seeing two key trends.

First, there’s a rapid increase in data center footprint—more facilities are being built across the country. Second, with the rise of AI-centric computing, power requirements per rack have increased significantly. We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more.

The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could rise to 5% if trends continue.

This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today.

So, sustainability needs to be at the top of the priority list—above cost or performance even. We're talking about designing systems that consume less power, deliver more performance, and are automated to optimize efficiency. That’s the challenge we’re trying to solve—managing higher heat densities, improving cooling, and adapting infrastructure to evolving demands.

Are Indian enterprises taking sustainability seriously?

In the last 12–18 months, sustainability has gone from a side note to a consistent part of customer conversations. The challenge now is implementation—and that starts with measurement. You can’t optimize what you can’t track. That’s why we’re building tools and dashboards into our products to help customers quantify energy use, carbon footprint, and ESG impact in real time.
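As a minimal sketch of the kind of real-time metric such a dashboard might surface, the calculation below converts facility energy draw into a carbon figure. The grid emission factor, IT load, and PUE values are illustrative assumptions for this example, not HPE data or product output:

```python
# Hypothetical sketch of a dashboard metric: annual CO2 from a data center,
# derived from IT load, PUE (cooling/overhead multiplier), and grid carbon intensity.
GRID_FACTOR_KG_PER_KWH = 0.71  # assumed grid emission factor; varies widely by region

def annual_co2_tonnes(it_load_kw, pue, hours=8760, grid_factor=GRID_FACTOR_KG_PER_KWH):
    """Annual CO2 (tonnes) for a given IT load, including overhead via PUE."""
    facility_kwh = it_load_kw * pue * hours  # total energy drawn from the grid
    return facility_kwh * grid_factor / 1000.0

# Same 500 kW IT load: lowering PUE from 1.6 (air-cooled) to 1.1 (liquid-assisted)
print(annual_co2_tonnes(500, 1.6))  # ~4,976 t
print(annual_co2_tonnes(500, 1.1))  # ~3,421 t
```

Tracking a figure like this continuously, rather than estimating it annually, is what makes the "you can't optimize what you can't track" principle actionable.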

Last year was about hardware refreshes. This year, it’s about embedding sustainability into the design phase—not treating it as an afterthought. We guide customers early, especially when consulting on new builds or expansions, so sustainability is baked in from the start.

We personalize the message. For example, I’ll ask: would you prefer bottled water shipped across the globe or something local and sustainable? When it becomes relatable, it starts to stick. But real momentum comes with regulation. Until sustainability is measurable and mandatory, adoption will vary. That’s where policy, tools, and cross-industry collaboration matter most.

We have to keep reminding people that the planet doesn’t belong to us—it’s borrowed from our grandchildren. That tends to create a sense of responsibility. Still, until it's mandated and measurable, adoption will be inconsistent. Policy and regulation will help, but we all need to play our part.

With AI workloads projected to double by 2028, what’s the long-term outlook for infrastructure?

The AI journey isn’t just about compute—it starts with infrastructure. Is your data center ready for AI-scale workloads? Cooling plays a central role, and traditional air systems can’t keep up with high-density demands. That’s why liquid cooling is no longer optional—it’s essential.

HPE supports both air and direct liquid cooling (DLC), depending on workload needs. While air cooling works for lower-density GPU setups, higher configurations demand DLC. We offer a 100% fanless DLC system that cools everything—from CPUs and GPUs to memory and networking—without relying on air. This innovation powers 7 of the top 10 Green500 supercomputers, thanks to precision thermal engineering tuned for each component.

How does DLC compare with air cooling in terms of performance?

Here’s a simple analogy. Imagine burning your finger. First, you blow on it. That’s like air cooling. But if it hurts more, you run it under water. That’s liquid cooling—faster, deeper, and more effective.

Air cooling works up to a point. But as components become denser, with more transistors per chip, air struggles. You'd need to run fans faster and push more chilled air to dissipate the heat, which is energy-intensive. Liquid, thanks to its far higher thermal conductivity, density, and heat capacity, absorbs and transfers heat much more efficiently.
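The physics behind that analogy can be shown with a back-of-envelope comparison of how much heat each coolant carries per unit of flow. The flow rate and temperature rise below are arbitrary illustrative values; the fluid properties are standard textbook figures at roughly room temperature:

```python
# Heat absorbed by a coolant stream: Q = rho * V_dot * c_p * dT
# Comparing air and water at the same volumetric flow and temperature rise.
AIR   = {"rho": 1.2,   "cp": 1005}  # density kg/m^3, specific heat J/(kg*K)
WATER = {"rho": 998.0, "cp": 4186}

def heat_removed_w(fluid, flow_m3_per_s, delta_t_k):
    """Heat (watts) carried away by a fluid stream for a given flow and dT."""
    return fluid["rho"] * flow_m3_per_s * fluid["cp"] * delta_t_k

flow, d_t = 0.001, 10.0  # 1 liter/second, 10 K temperature rise
q_air   = heat_removed_w(AIR, flow, d_t)    # ~12 W
q_water = heat_removed_w(WATER, flow, d_t)  # ~41.8 kW
print(q_water / q_air)  # water carries roughly 3,500x more heat per unit flow
```

This volumetric gap is why dissipating the same rack heat with air requires enormous airflow and fan power, while a modest water loop through a cold plate handles it quietly.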

Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether.

How is DLC improving efficiency in HPE customer data centers?

Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density. It offers greater efficiency and sustainability—like at the U.S. Department of Energy’s NREL, which reused 90% of heated water from its HPE Cray supercomputer to heat office and lab spaces. HPE is leading this shift, turning data centers into more eco-friendly, high-performance environments.

How is HPE leading in DLC innovation?

With over 300 patents and decades of experience, HPE is a DLC pioneer. We recently launched the industry’s first 100% fanless DLC system, which powers 7 of the top 10 Green500 supercomputers and cuts cooling power use by 37%. Our new HPE ProLiant Gen12 servers also boost performance per watt by up to 41%, helping customers reduce costs and carbon footprints.

Do you have quantifiable results on DLC’s benefits?

Yes. In one HPC data center with 10,000 servers, air cooling produced over 8,700 tons of CO₂ emissions annually. Switching to liquid cooling brought that down to just 1,200 tons—an 87% reduction. The cost savings? Up to 86% annually. Over time, this prevents nearly 17.8 billion pounds of CO₂ from entering the atmosphere.

What are the hurdles to DLC adoption?

There are a few. First, customer education—many still need clarity. Second, the complexity—managing cold plates, fluid systems, and integration with existing setups takes skill. Third, the cost—while upfront investment is higher, total cost of ownership favors DLC over time.
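The cost argument above can be framed as a simple break-even calculation. All figures below are hypothetical placeholders, not HPE pricing; the point is only the shape of the trade-off, higher capex recovered through lower annual operating cost:

```python
# Illustrative-only TCO comparison for air cooling vs. DLC.
# All dollar figures are hypothetical placeholders, not vendor pricing.
def cumulative_cost(capex, annual_opex, years):
    """Total spend after `years`: one-time build cost plus recurring operations."""
    return capex + annual_opex * years

def breakeven_year(capex_air, opex_air, capex_dlc, opex_dlc, horizon=15):
    """First year at which the DLC build becomes the cheaper option, else None."""
    for year in range(1, horizon + 1):
        if cumulative_cost(capex_dlc, opex_dlc, year) < cumulative_cost(capex_air, opex_air, year):
            return year
    return None

# Hypothetical: DLC adds $400k upfront but cuts cooling opex from $250k to $100k/yr.
print(breakeven_year(1_000_000, 250_000, 1_400_000, 100_000))  # -> 3
```

Under these assumed numbers the DLC build pays for itself in year three; every year after that, the savings compound, which is why total cost of ownership tends to favor DLC despite the higher initial investment.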

But the technology is mature. HPE has worked with liquid cooling for decades. It’s time to scale it.

Which industries are adopting DLC faster?

We’re seeing faster adoption in sectors requiring real-time or large-scale compute, especially where CPU-GPU density is high. R&D-heavy industries and data centers with fresh builds are proactively planning for DLC infrastructure.

Conversations that were difficult two years ago are now far easier. Today, enterprises are asking implementation-focused questions:

How will the heat be dissipated?
What infrastructure is needed?
Can we reuse the heat discharged from coolants?

Interestingly, repurposing waste heat is a rising trend. Globally, there are many examples where excess heat is being reused efficiently—for everything from space heating to industrial processes. That’s another advantage of DLC that’s now gaining attention.

What should organizations consider when designing data centers for 100% DLC adoption?

The first step is collaborating with the customer throughout the design process. We consider factors like power requirements, the engineering of the DLC system, and how to integrate renewable energy sources. We also look at system performance, such as the number of CPUs, GPUs, and interconnects, to ensure all components are properly cooled.

It's a holistic approach, and we work on a comprehensive design with the customer, addressing costs and return on investment. Our Wisconsin-based factory specializes in high-performance, fanless DLC systems, which are essential for these larger-scale systems.

How are you collaborating with partners to scale these innovations?

We collaborate with both customers and ecosystem partners. Every solution we design is customized to the specific needs of our customers.

We also work with component manufacturers and industry standards organizations to standardize solutions. The goal is to scale these systems globally, especially as demand for larger systems grows.

To wrap up, how is AI fundamentally changing the nature of data center design?

AI is both a catalyst and a disruptor. It’s transforming compute requirements, architecture, and operational norms. We need hybrid CPU-GPU systems, flexible scaling, advanced cooling, and most importantly—a sustainability-first mindset.

We’re not just building data centers anymore. We’re building the digital backbone of a future we must preserve.

What advice would you give to businesses upgrading their cooling systems?

Make sustainability a priority from day one. Start by auditing your infrastructure, then adopt efficient technologies like DLC, integrate renewable energy, and ensure designs are scalable and compliant. Train your teams for the transition. Liquid cooling will dominate—but the key is aligning cooling strategy with long-term sustainability goals.