As AI workloads redefine compute intensity, India’s data centres are no longer passive infrastructure; they are strategic engines of the digital economy. CtrlS’ Vipin Jain explains how power density, cooling innovation, water discipline, and auditable sustainability are shaping the next generation of AI-ready facilities.
India’s data centre landscape is undergoing a structural shift. What was once a backend enabler is now a boardroom priority, driven by AI, hyperscale cloud adoption, and the growing urgency around energy efficiency and sustainability. As rack densities surge and workloads become more heat-intensive, operators are being pushed to rethink everything, from power architecture and cooling strategies to water stewardship and ESG accountability.
CtrlS, long associated with enterprise-grade resilience and Rated-4 reliability, is positioning itself for this new phase of AI density. In this conversation, Vipin Jain, President – Hyperscale Growth, Delivery & Innovation at CtrlS Datacenters, outlines how the company is preparing for ultra-high-density AI workloads while maintaining efficiency, operational discipline, and measurable sustainability outcomes.
CtrlS is often seen as synonymous with resilience. How is that evolving as AI pushes density limits?
The role of resilience hasn’t diminished; in fact, it has become more critical. What has changed is the operating context. AI, HPC, and GPU-driven workloads place sustained stress on power and cooling systems in ways traditional enterprise workloads never did.
At CtrlS, we design infrastructure to support a wide spectrum of densities, from conventional enterprise environments to hyperscale and AI-centric deployments. Today, our AI-ready facilities support rack densities starting at around 30 kW and extending beyond 100 kW, with advanced configurations capable of reaching up to 250 kW per rack.
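To put those density figures in perspective, here is a rough back-of-the-envelope comparison; the enterprise baseline and the ten-rack row size are illustrative assumptions, not CtrlS design figures.

```python
# Illustrative comparison of heat load per row at different rack densities.
# The enterprise baseline (8 kW) and the 10-rack row are assumptions for
# illustration; the AI figures follow the densities quoted above.

RACKS_PER_ROW = 10  # assumed row size

densities_kw = {
    "enterprise (assumed)": 8,
    "AI baseline": 30,
    "AI high-density": 100,
    "AI advanced": 250,
}

for profile, kw_per_rack in densities_kw.items():
    row_load_mw = kw_per_rack * RACKS_PER_ROW / 1000
    print(f"{profile:22s}: {kw_per_rack:>4} kW/rack -> {row_load_mw:.2f} MW per row")
```

At 250 kW per rack, a single ten-rack row dissipates on the order of 2.5 MW continuously, roughly the IT load of a small conventional facility, which is why, as noted below, power and cooling can no longer be designed as independent layers.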
As density increases, design philosophy must evolve. Power infrastructure, backup systems, and cooling can no longer be treated as independent layers; they have to be tightly integrated. Our facilities use modular and scalable power and cooling architectures that allow us to expand capacity without disrupting live environments. Rated-4 resilience is non-negotiable, even under continuous, high-density AI workloads.
The real focus is flexibility. Customers shouldn’t be forced into an all-or-nothing transition. Our approach allows them to move gradually to higher densities while preserving uptime, efficiency, and performance.
High-density AI infrastructure is less about brute force and more about disciplined engineering that sustains reliability at scale.
High-density workloads demand integrated power, precision cooling, and resilience that can sustain continuous AI operations.
Cooling is becoming the defining constraint for AI data centres. How is CtrlS approaching the shift from air to liquid?
A cooling strategy has to anticipate tomorrow’s workloads, not just solve for today. Air-based cooling, when optimised properly, remains highly effective for moderate densities and continues to be our baseline thermal solution across operational facilities.
However, as rack densities rise beyond the practical limits of air cooling, the equation changes. That’s why our AI-ready infrastructure is designed with modular, liquid-ready layouts from the outset. We can support traditional air cooling, direct liquid cooling, and immersion cooling within the same facility, depending on workload requirements.
The trigger for liquid cooling is not a trend; it’s physics. When power densities and heat loads cross certain thresholds, particularly with GPU-heavy AI workloads, liquid solutions become the most efficient and sustainable option. Liquid cooling also opens opportunities for better energy efficiency in high-performance zones.
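To make the physics concrete, a simple heat-transport estimate shows where air runs out of headroom; the 100 kW rack load and the temperature rises used below are illustrative assumptions, not CtrlS operating parameters.

```python
# Illustrative heat-transport estimate: how much air vs. water it takes to
# remove 100 kW of heat from a single rack. All values are assumptions for
# illustration only.

rack_heat_w = 100_000          # 100 kW of IT load, rejected as heat

# Air cooling: Q = m_dot * cp * dT
cp_air = 1005                  # J/(kg*K), specific heat of air
rho_air = 1.2                  # kg/m^3, approximate air density
dt_air = 15                    # K, assumed supply-to-return temperature rise

m_dot_air = rack_heat_w / (cp_air * dt_air)        # kg/s of air
vol_flow_m3s = m_dot_air / rho_air                 # m^3/s
vol_flow_cfm = vol_flow_m3s * 2118.88              # cubic feet per minute

# Direct liquid cooling: water carries far more heat per unit mass and volume
cp_water = 4186                # J/(kg*K), specific heat of water
dt_water = 10                  # K, assumed coolant temperature rise
m_dot_water = rack_heat_w / (cp_water * dt_water)  # kg/s, roughly litres/s

print(f"Air:   {m_dot_air:.1f} kg/s (~{vol_flow_cfm:,.0f} CFM through one rack)")
print(f"Water: {m_dot_water:.1f} kg/s (~{m_dot_water:.1f} L/s through one rack)")
```

Moving roughly 12,000 CFM of air through a single rack is impractical for conventional air handling, whereas a couple of litres per second of water is routine plumbing; that gap is the threshold at which direct liquid or immersion cooling becomes the efficient option.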
By designing for liquid readiness at the planning stage, we avoid disruptive retrofits. This enables phased investment, operational continuity, and a future-proof path as AI workloads continue to evolve.
Designing data centres as liquid-ready from day one enables seamless scaling without retrofits or operational disruption.
Sustainability claims are under intense scrutiny today. How do you ensure customers can audit what you report?
Sustainability has to be measurable, transparent, and verifiable—especially for customers who must disclose ESG metrics to regulators and investors. At CtrlS, we embed measurement and third-party validation into our sustainability framework.
We publish sustainability reports aligned with global standards, tracking metrics such as energy consumption, PUE improvements, and carbon performance. These reports are designed to be directly usable by customers for their own ESG disclosures.
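For readers tracing how a headline metric like PUE feeds into an ESG disclosure, the underlying arithmetic is simple; the annual energy figures below are illustrative assumptions, not numbers from CtrlS reports.

```python
# Power Usage Effectiveness: total facility energy divided by IT equipment energy.
# The annual figures below are illustrative assumptions, not CtrlS reported data.

it_energy_mwh = 50_000         # energy delivered to IT equipment in a year
facility_energy_mwh = 70_000   # total facility energy (IT + cooling + losses + lighting)

pue = facility_energy_mwh / it_energy_mwh
print(f"PUE = {pue:.2f}")      # 1.40 in this example

# What a PUE improvement is worth: overhead energy avoided for the same IT load.
improved_pue = 1.30
saved_mwh = (pue - improved_pue) * it_energy_mwh
print(f"Improving PUE from {pue:.2f} to {improved_pue:.2f} saves ~{saved_mwh:,.0f} MWh/year")
```

Because the saving is expressed against the same IT load, a verified PUE improvement can be carried directly into a customer's own energy and carbon accounting.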
Our facilities hold internationally recognised certifications, including ISO 14001 for environmental management, and multiple sites have achieved LEED Platinum certification. These are not symbolic; they validate that our operational practices meet rigorous global benchmarks.
On renewable energy, we’ve taken tangible steps. GreenVolt1, our captive solar farm in Nagpur, was among India’s first such initiatives and supplies clean energy to our campus. We’ve also signed an MoU with NTPC Green Energy to develop up to 2 GW of renewable projects, significantly strengthening our long-term renewable sourcing strategy.
Together, certified processes, transparent reporting, and renewable investments give customers confidence that our sustainability claims stand up to audit and international ESG standards.
Auditable sustainability is no longer a differentiator; it’s a baseline expectation.
Water stress is a growing concern, especially in Indian summers. How do you balance cooling performance with conservation?
Water management is central to both sustainability and operational resilience. Our strategy is built on reuse, recycling, and disciplined operational controls rather than increased freshwater consumption.
Across our facilities, we deploy advanced water recycling systems and rainwater harvesting. In Mumbai, for example, we use recycled grey water for cooling tower operations, significantly reducing dependence on freshwater sources while maintaining performance.
At key sites—including Mumbai, Hyderabad, Noida, and Bengaluru—between 70% and 90% of water is harvested and/or recycled. Cumulatively, we’ve recycled nearly 10 billion litres of water across our operations.
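As a rough illustration of what those recycling percentages mean for freshwater draw, here is a simple water-balance sketch; the daily demand figure is an assumed example, not a CtrlS site figure.

```python
# Illustrative water-balance arithmetic for a cooling-tower-based site.
# Daily demand and the recycling share are assumptions, not CtrlS site data.

daily_cooling_water_litres = 500_000   # assumed daily make-up water demand
recycled_share = 0.80                  # within the 70-90% range cited above

freshwater_litres = daily_cooling_water_litres * (1 - recycled_share)
annual_freshwater_displaced = daily_cooling_water_litres * recycled_share * 365

print(f"Freshwater draw per day: {freshwater_litres:,.0f} L")
print(f"Freshwater displaced per year: {annual_freshwater_displaced:,.0f} L")
```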
Even during peak summer conditions, we don’t default to higher freshwater usage. Effective cooling design, reuse strategies, and operational discipline allow us to maintain performance for mission-critical and high-density workloads without compromising environmental responsibility.
Water stewardship, like energy efficiency, has to be engineered into the system, not managed reactively.
Verified renewable energy use, certified processes, and transparent reporting are now essential for enterprise ESG accountability.
Enterprises often talk about “AI data centres” as a separate category. What’s the biggest misconception here?
The most common misconception is that AI data centres are fundamentally different entities. While AI workloads do increase density, power, and cooling demands, the core principles of reliability, uptime, and efficiency remain unchanged.
AI readiness is not about branding; it’s about engineering and operations. Supporting AI workloads requires scalable and resilient power delivery, precision cooling, and flexible designs that can handle GPUs and accelerators efficiently over sustained periods.
Simply adding more compute without addressing these fundamentals leads to inefficiency and risk. The focus must remain on mission-critical resilience, cost-effective energy management, and sustainability. When designed correctly, AI workloads can scale reliably without undermining operational or environmental goals.