Engineering India’s AI-first data centres at hyperscale

Rohan Sheth explains how AI and HPC are reshaping India’s data centres, from density and cooling to power economics, sustainability, and hyperscale decision criteria.

Manisha Sharma
Rohan Sheth, Head of Colocation and Global Expansion, Yotta Data Services


As AI and high-performance computing move from experimentation to production, India’s data centres are being reshaped at a foundational level. Density, cooling choices, power availability, and sustainability economics now matter as much as location and uptime once did.


Yotta is positioning itself at the centre of this shift, aligning hyperscale campus design with the real-world demands of AI and HPC workloads. In this interview, Rohan Sheth, Head of Colocation and Global Expansion, explains how AI infrastructure is materialising on the ground in India—what demand really looks like today, how far density can scale, where air cooling gives way to liquid, and why power sourcing has become the most critical sustainability lever.

AI is driving unprecedented interest in data centres. How much of today’s demand is truly AI-led?

The demand we’re seeing today is clearly skewed toward AI and HPC rather than traditional enterprise migration. Industry data shows that AI-ready capacity already accounts for roughly 70% of incremental global data centre demand and is growing at around a 33% CAGR through 2030—nearly ten times faster than conventional enterprise infrastructure.
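To put those growth figures in perspective, a quick compounding check shows what a 33% CAGR implies over the rest of the decade. This is illustrative arithmetic only; the rate and 2030 horizon come from the interview, while the five-year window (roughly 2025 to 2030) is an assumption.

```python
# Back-of-envelope check on the quoted growth rate. The 33% CAGR and
# 2030 horizon are from the interview; the 5-year window is assumed.

def compound_growth(cagr: float, years: int) -> float:
    """Total growth multiple after `years` at a constant annual rate."""
    return (1 + cagr) ** years

multiple = compound_growth(0.33, 5)
print(f"AI-ready capacity multiple over 5 years: {multiple:.1f}x")
```

At that pace, AI-ready capacity would roughly quadruple in five years, which is consistent with the "nearly ten times faster than conventional enterprise infrastructure" framing.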


At Yotta, this trend is even more pronounced. Nearly 90% of our new demand is AI- and HPC-led, with only about 10% coming from general enterprise cloud or data centre migration. Enterprises are no longer just moving workloads—they are actively looking for AI-enabling infrastructure.

AI demand has overtaken traditional enterprise migration, redefining how data centres are built and scaled.

The earliest use cases are centred on large GPU clusters for foundation model training, fine-tuning domain-specific LLMs, and building high-throughput data pipelines. Over time, we expect inference to overtake training as the dominant workload, pushing AI capacity closer to end users through sovereign, hybrid, and edge deployments.

AI infrastructure demand is no longer speculative—it is structural and accelerating. 

What density levels are you supporting today, and how far are your designs prepared to scale?

Yotta’s data centres are purpose-built for GPU-intensive workloads. Today, we support rack densities in the 50–60 kW range, with infrastructure designed to scale to 120–130 kW per rack as demand increases.

This is enabled through a high-density power and cooling architecture that supports advanced air cooling, rear-door heat exchangers, and liquid-cooling design provisions. Our Tier IV hyperscale campuses—such as NM1 in Navi Mumbai and D1 and D2 in Greater Noida—are built with long-term scale in mind.

Collectively, these campuses are designed to support more than 130 MW of IT power and host over 44,000 high-end GPUs, including upcoming H200, B200, and next-generation AI accelerators. As GPU platforms continue to increase per-rack power density, the next phase involves wider deployment of direct-to-chip liquid cooling and immersion-ready designs.

The key is continuity: customers can scale density within the same campus as their compute requirements evolve.
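The quoted figures imply some useful back-of-envelope numbers: how many racks a megawatt supports at each density, and the all-in power budget per GPU across the campuses. This is an illustrative sketch only; real deployments reserve headroom for cooling, redundancy, and networking, so actual counts will be lower.

```python
# Rough capacity arithmetic from the figures quoted above (50-60 kW
# racks today, 120-130 kW planned, 130 MW IT power, 44,000 GPUs).
# Illustrative only; ignores redundancy and networking overheads.

def racks_per_mw(rack_kw: float) -> int:
    """Whole racks supportable per MW of IT power at a given density."""
    return int(1000 // rack_kw)

print(racks_per_mw(60))    # ~16 racks/MW at today's densities
print(racks_per_mw(130))   # ~7 racks/MW at the planned ceiling

it_power_mw = 130
gpus = 44_000
kw_per_gpu = it_power_mw * 1000 / gpus
print(f"~{kw_per_gpu:.2f} kW of IT power per GPU, all-in")
```

The roughly 3 kW-per-GPU figure illustrates why per-rack density, not floor space, has become the binding constraint for AI campuses.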

Where do you draw the line between advanced air cooling and liquid-ready design?

The dividing line is fundamentally driven by rack density and thermal load. High-efficiency air cooling, combined with airflow optimisation and rear-door heat exchangers, remains effective and cost-efficient for moderate to high densities. It also offers operational simplicity and proven reliability.

However, as AI clusters scale and rack densities rise sharply, air cooling becomes less efficient and increasingly energy-intensive. At that point, liquid cooling—either direct-to-chip or immersion—becomes essential to sustain performance and manage heat effectively.

Cooling choices are dictated by physics—air works to a point, liquid becomes inevitable at extreme densities.

Our approach is to design facilities that support both. Customers can begin with advanced air cooling and transition to liquid cooling as workload intensity increases. This phased model allows AI workloads to scale within the same campus, without disruptive retrofits or stranded investments.

Cooling strategy, in the AI era, is about optionality backed by physics.

Sustainability is in the spotlight. What creates the biggest real-world impact without hurting uptime or cost?

The single most impactful sustainability lever is clean, firm power at the grid interconnection level—long-term renewable PPAs backed by firming—because it directly addresses where emissions actually sit.

For hyperscale and AI data centres, 90–95% of lifetime emissions fall under Scope 2, driven by electricity consumption. If sustainability strategies don’t tackle grid power carbon intensity first, their overall impact is limited.

In practice, grid power carbon intensity leads the hierarchy of impact by a wide margin, followed by facility efficiency (PUE and cooling), then hardware efficiency, water optimisation, and finally construction materials.

Decarbonising grid power delivers more sustainability impact than any other data centre intervention.

At Yotta, power efficiency through intelligent design and operations—measured via PUE—has delivered tangible reductions in energy use without compromising uptime. Our facilities operate at PUE levels of around 1.4, among the lowest in the industry, achieved through high-efficiency cooling, optimised airflow, energy-efficient UPS systems, and AI-driven energy monitoring.

Renewable sourcing is equally critical. Yotta D1 currently runs on 100% green energy for its active load, while NM1 operates at around 80% renewable energy—without sacrificing Tier IV fault tolerance or uptime guarantees.

What are the top decision factors customers consider today—and how is that changing with AI?

Today, customers typically choose Yotta based on three core factors: access to large-scale land, power, and fibre; AI-ready infrastructure with reliability at scale; and sovereign, compliant operations.

First, the availability of contiguous land parcels with abundant high-voltage power and bulk fibre has become foundational for AI deployments. Traditional site-selection criteria—like proximity to business hubs—have become far less relevant.

Second, AI readiness is now a decisive factor. Customers are actively evaluating high-density GPU capability and advanced cooling support. Yotta’s hyperscale campuses, spread across more than 700 acres in Navi Mumbai and Greater Noida with over 2 GW of power availability, directly address this need.

Third, uptime and resilience remain non-negotiable. Tier IV certification, 100% uptime guarantees, and fault-tolerant architecture continue to matter deeply for enterprises, government, and mission-critical workloads.

Data sovereignty and regulatory compliance are also rising sharply in importance. For BFSI, public sector, and AI workloads, in-country compute aligned with India’s DPDP Act is now essential.

In the AI era, this decision matrix is expanding further. Customers are increasingly evaluating speed to scale, efficiency at extreme densities, and whether sustainability goals can be met alongside performance requirements.

manishas@cybermedia.co.in