PUE is not a grapefruit metric anymore

So what is the new high-hanging fruit for data centre strategists today? And are players going after it?

Pratima H

Narendra Sen, Founder & CEO, RackBank & NeevCloud, takes us on a walk through the fresh data centre landscape in India, showing us how 100 kW to 150 kW per-rack densities, the significance of power, data centre physics, AI grid models, PPAs, captive set-ups, localisation, and vertically integrated stacks are shuffling a lot of pieces in what was once just an IT backyard but is now an AI factory playground.


What are the top two to three changes that you see in the data centre landscape?

The landscape of data centres is shifting in three major ways. The most fundamental shift is that power has replaced connectivity as the new currency. Historically, data centres followed fibre landing stations—Mumbai and Chennai were the defaults. We are seeing a shift from “carrier-neutral” to “power-neutral” and “carbon-neutral” facilities. Second, consider the redefinition of ‘density’. A few years ago, 8–10 kW per rack was high density. Now, with NVIDIA’s H100s and the upcoming Blackwell architecture, we are engineering for 100 kW to 150 kW per rack. This isn’t an upgrade; it’s a complete structural rethink of the data centre physics—from floor loading to cooling loops.
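The jump from 8–10 kW to 100+ kW racks can be made concrete with a back-of-envelope heat-removal calculation. The sketch below uses standard air properties and the rack figures from the interview; the 10 K inlet-to-outlet temperature rise is an illustrative assumption, not a figure from the interview.

```python
# Back-of-envelope airflow needed to remove rack heat with air cooling.
# Air properties are standard approximations; rack powers are the
# figures mentioned in the interview, the delta-T is assumed.
RHO_AIR = 1.2    # kg/m^3, air density near sea level
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3s(rack_power_w: float, delta_t_k: float = 10.0) -> float:
    """Volumetric airflow (m^3/s) needed to carry away rack_power_w
    at a delta_t_k inlet-to-outlet rise: P = rho * cp * dT * V."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)

legacy_rack = airflow_m3s(10_000)    # ~0.83 m^3/s for a 10 kW rack
ai_rack = airflow_m3s(100_000)       # ~8.3 m^3/s for a 100 kW rack
```

Roughly 8 cubic metres of air per second through a single rack is impractical to move quietly and efficiently, which is why densities in this range push designs toward liquid or hybrid cooling.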

Power has replaced connectivity as the new currency.

The third involves sovereignty. We are moving from being a “data warehouse” for global tech giants to becoming an “AI factory” for India. The conversation has moved from just storing data locally to computing it locally on indigenous clouds.


Any of your own examples and overall observations on the industry movements in areas like High-density design choices and capacity planning?

As providers deploy more high-density racks based on the latest technological advances, they need to strategise their facility design to properly manage the heat generated by these racks, which are typically rated for 30–60 kW of power per rack (or more). This requires rethinking every aspect of a facility: power architecture, cooling systems, floor-plan layouts, and structural design.

In terms of a facility’s capacity planning, the focus is shifting from a linear growth model to a modular, scalable model, which allows operators to better align capex with the demand they are actually experiencing.

What about power availability, PPAs/Captive models, data sovereignty/localisation and integrated stacks?

The availability of power has become the most critical constraint on growth. As a result, operators are establishing new power-purchasing models (e.g., long-term PPAs), captive renewable power, and grid-renewable hybrid strategies, which will become common features of facilities in the near future.

Carbon neutrality is not an add-on but a necessity. Large-scale AI compute will not be viable unless it is environmentally responsible.

Data sovereignty and data localisation are no longer viewed as regulatory checkboxes; they are now considered core elements of a facility’s architecture. Perfect examples are BFSI (Banking, Financial Services, and Insurance), government agencies, and AI companies. I firmly believe in vertically integrated stacks that combine cloud, data centre, network, and security, as they deliver repeatable performance, regulatory compliance, and cost efficiency.

Why are you opting for choices like sovereignty as a core feature, and carbon-neutrality as a key aspect of your solutions?

Sovereignty is foundational to our model because India’s next phase of digital growth, especially in AI, fintech, and public digital platforms, requires data to remain within Indian borders, under Indian legal and security frameworks.

Carbon neutrality is not an add-on but a necessity. Large-scale AI compute will not be viable unless it is environmentally responsible. From the design stage itself, we prioritise energy-efficient infrastructure, renewable integration, and carbon-neutral operations to ensure sustainability at scale.

What is changing, in a substantial and scalable way, on sustainability, especially with water management, cooling mechanisms, energy sources, and PUE?

Sustainability is transitioning from a conceptual idea to active execution. In water management, many facilities now incorporate closed-loop systems, minimal-water cooling, and watershed-style recycling models, especially in water-starved areas.

Cooling designs are moving away from traditional air-based systems towards more efficient liquid cooling (e.g., water-based) or hybrid cooling architectures that yield significantly higher efficiencies for AI workloads. The power-supply mix is also changing rapidly, moving heavily into renewable energy through various PPAs, captive solar production, and storage-backed delivery models.

The Power Usage Effectiveness (PUE) metric has gone from a “grapefruit” metric, reported merely as a cost of operation, to a targeted design outcome: a new building’s layout, automation, and AI-driven facility management are engineered from the start for best-in-class efficiency.
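PUE itself is a simple ratio: total facility energy divided by IT equipment energy, with 1.0 as the theoretical ideal. A minimal sketch, with made-up illustrative figures (not numbers from the interview):

```python
# Illustrative PUE calculation; the energy figures below are
# hypothetical examples, not data from the interview.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT energy."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A legacy air-cooled hall vs an efficiency-first design (hypothetical):
legacy = pue(total_facility_kwh=1_800, it_equipment_kwh=1_000)   # 1.8
modern = pue(total_facility_kwh=1_150, it_equipment_kwh=1_000)   # 1.15
```

Treating PUE as a design target means deciding the cooling topology, power chain, and automation up front so that the non-IT overhead (the numerator minus the denominator) is minimised before the building exists.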

Are data centres moving forward well on issues like outages, downtime, redundancies, security loopholes, AI-readiness, etc.?

Uptime, redundancy, and security have come a long way, but expectations are now much greater, as AI systems leave far less tolerance for downtime. As a result, operators are investing in multi-layered redundancy, predictive maintenance, and intelligent monitoring systems. Security is also no longer just physical and network-based; AI systems have introduced requirements for AI-aware cybersecurity frameworks (i.e., protecting AI models from data poisoning, and safeguarding their integrity and sovereign compliance).

Compare AI to the internet boom. There was a bubble, but the internet didn’t go away; it became the utility. I don’t worry about idle infrastructure because India is data-rich but compute-poor.

Therefore, readiness to deploy AI solutions today should be judged not merely on power and cooling but also on operational resilience, automation, and secure-by-design infrastructure.

Is AI compute demand making data centre investments and strategies myopic and one-dimensional? Any worries about AI bubbles and idle infrastructure?

There is a legitimate risk of short-term speculative investment driven exclusively by AI hype. But compare AI to the internet boom. There was a bubble, but the internet didn’t go away; it became the utility. I don’t worry about idle infrastructure because India is data-rich but compute-poor.

We do, however, anticipate that disciplined investors who build flexible, multi-purpose infrastructure able to support AI, cloud, and enterprise workloads will remain viable businesses. While AI represents a long-term structural shift, infrastructure decisions must be based on real workload data, ecosystem readiness, and sustainable economics, rather than merely on headline demand forecasts.

What model would prevail and grow: AI-grids/factories or private infrastructure growth for data centre needs?

Both models will coexist, but in different contexts. AI grids or AI factories will grow for hyperscalers, national platforms, and large-scale model training where shared, centralised compute makes sense.

At the same time, private and sovereign infrastructure will see strong growth, particularly for enterprises, regulated sectors, and government use cases that require control, compliance, and predictable performance. In India’s context, a hybrid, distributed model combining regional data centres with sovereign AI-ready infrastructure is likely to prevail and scale sustainably.

pratimah@cybermedia.co.in