India’s data centre industry has entered a decisive phase. What was once a conversation dominated by space, uptime, and connectivity has shifted sharply towards power density, cooling efficiency, and sustainability at scale. The rise of artificial intelligence workloads, particularly GPU-intensive training and inference models, has fundamentally altered how data centres are designed, built, and operated.
Enterprises, cloud providers, and governments are no longer planning for incremental growth. Instead, they are preparing for step-change demand driven by AI platforms, sovereign cloud initiatives, and the expansion of global capability centres. This shift is pushing infrastructure beyond traditional enterprise norms and exposing structural constraints that were previously manageable, most notably around power availability, grid readiness, and water usage.
Across India, new facilities are being conceived as AI-ready by default. This means planning for rack densities far above historical norms, designing cooling systems that can handle sustained thermal loads, and securing long-term power through a mix of grid supply, renewables, captive generation, and storage. The data centre is no longer a passive container for IT; it has become an active, engineered system that must balance performance, resilience, and environmental impact simultaneously.
POWER BECOMES THE PRIMARY DESIGN CONSTRAINT
Power has emerged as the defining bottleneck for the next phase of data centre growth. Large AI-focused facilities can require more than 100 MW, pushing both grid infrastructure and commercial viability to their limits. In many regions, power availability, not land or connectivity, is now the gating factor for expansion.
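The scale implied by a 100 MW facility can be checked with a back-of-envelope calculation. The household consumption figure below is purely an illustrative assumption, not a sourced statistic, and real facilities rarely run at full load around the clock:

```python
# Rough annual energy draw of a 100 MW data centre, assuming
# continuous full-load operation (an upper-bound simplification).
capacity_mw = 100
hours_per_year = 8760
annual_mwh = capacity_mw * hours_per_year
print(annual_mwh)  # 876000 MWh per year

# Illustrative comparison only: assumed average household consumption.
household_kwh_per_year = 1200  # assumption; varies widely by region
homes_equivalent = annual_mwh * 1000 // household_kwh_per_year
print(homes_equivalent)  # 730000 homes at the assumed consumption rate
```

Even allowing for lower utilisation, the comparison makes clear why a single large campus can strain a regional grid.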
This reality is driving a shift towards power-led planning. Operators are investing in long-term power purchase agreements, captive renewable assets, energy storage, and behind-the-meter solutions to reduce dependence on constrained grids. In parallel, grid congestion in developed markets is prompting global cloud providers to look towards countries like India, where demand growth aligns with infrastructure build-out.
However, securing power is only part of the equation. The cost, reliability, and sustainability of that power directly influence uptime, scalability, and long-term economics. As AI workloads reduce tolerance for instability, data centres must be engineered to deliver predictable performance even as power systems become more complex and distributed.
COOLING MOVES FROM SUPPORT FUNCTION TO CORE CAPABILITY
As rack densities rise, cooling has shifted from an operational concern to a strategic capability. Traditional air-based cooling struggles to cope with sustained AI workloads, accelerating the adoption of liquid-assisted and hybrid cooling architectures. These systems promise higher efficiency, but they also introduce new operational challenges around maintenance, monitoring, and lifecycle management.
Making liquid cooling mainstream is less about hardware readiness and more about operational maturity. Data centre teams must manage coolant quality, detect leaks early, and coordinate tightly between IT, power, and thermal systems. Retrofitting legacy facilities adds further complexity, requiring modular designs that allow a gradual transition without disrupting live operations.
The most resilient operators are adopting hybrid approaches, retaining air cooling for standard workloads while deploying liquid solutions selectively where density demands it. This flexibility reduces risk, protects capital, and ensures facilities can evolve as AI adoption patterns mature.
SUSTAINABILITY SHIFTS FROM INTENT TO ENGINEERING DISCIPLINE
Sustainability is no longer an aspirational add-on. It is becoming a core design principle. Metrics such as power usage effectiveness, water usage effectiveness, and carbon intensity are now being considered at the earliest stages of planning. Closed-loop water systems, renewable-heavy power mixes, and energy-efficient cooling are increasingly standard rather than exceptional.
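The two headline metrics are simple ratios, which is part of why they are easy to design against from day one. The sketch below uses the standard definitions (PUE as total facility energy over IT energy, WUE as litres of water per kWh of IT energy); the input figures are illustrative, not drawn from any real facility:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.
    1.0 is the theoretical ideal (zero overhead for cooling, power losses, etc.)."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_litres: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water consumed per kWh of IT energy."""
    return water_litres / it_equipment_kwh

# Illustrative annual figures for a hypothetical site:
print(pue(13_500_000, 10_000_000))  # 1.35 -> 35% energy overhead beyond IT load
print(wue(18_000_000, 10_000_000))  # 1.8 litres per IT kWh
```

Because both denominators are IT energy, improvements in cooling and power distribution show up directly in the ratios, which is why operators track them from the design stage rather than after commissioning.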
At the same time, there is growing scrutiny of whether sustainability commitments translate into real-world outcomes. As AI drives higher baseline energy consumption, data centre operators must demonstrate that growth can be decoupled from emissions. This is pushing the industry towards deeper integration of renewables, storage, and intelligent operations driven by automation and AI itself.
AVOIDING THE RISKS OF OVER-OPTIMISATION
While AI is clearly influencing investment decisions, industry leaders are increasingly cautious about overbuilding rigid, narrowly optimised infrastructure. The risk is not AI adoption slowing, but assets becoming stranded if they cannot support a broader mix of workloads.
Flexible infrastructure, phased expansion, and utilisation-driven design are emerging as best practices. Facilities that can support multiple densities, adapt to evolving platforms, and productise AI infrastructure into reliable services are better positioned to deliver long-term returns. In this context, the biggest challenge ahead is not any single risk, but managing systemic interdependencies across power, cooling, regulation, geopolitics, and security.
INDUSTRY VOICES
AI infrastructure and systemic risk
Douglas Donnellan
What has been the biggest change in the data centre industry, especially after AI workloads? Is the AI bubble a big risk for data centre infra? How much will it affect data centres if something cracks?
AI workloads are driving greater specialisation in data centre design, as their infrastructure requirements differ significantly from traditional enterprise workloads. This has increased demand for power and space, raised costs, and reduced tolerance for instability.
An “AI bubble” is primarily a risk for organisations heavily invested in narrowly optimised, high-density infrastructure. If demand for AI softens, those assets may become stranded. Facilities designed with flexibility to support diverse workloads are more resilient, meaning the core risk is not AI itself, but over-optimising too early and too rigidly.
What’s the biggest challenge ahead: outages, idle capacity, energy impact, geopolitical shifts, data sovereignty, security, or something else?
The biggest challenge ahead is managing systemic risk rather than isolated failures. Outages, energy constraints, geopolitical shifts, and data sovereignty concerns are increasingly interconnected, especially as AI workloads raise baseline power demand and reduce tolerance for disruption.
Sustainability moves from concept to execution
Narendra Sen
The biggest change has been the transition of sustainability from a conceptual idea to one that is actively being executed. In water management, many operators now incorporate closed-loop systems, minimal-water cooling, and watershed-based recycling models, especially in water-starved areas.
Cooling design is also moving away from traditional air-based systems towards more efficient liquid cooling (typically water-based) or hybrid cooling architectures that deliver significantly higher efficiency for AI workloads. The power supply mix is changing rapidly as well, shifting heavily towards renewable energy through PPAs, captive solar generation, and storage.
India’s data centre growth and its constraints
Vineet Mittal
What has been the biggest change in the data centre industry in India, and is India echoing global patterns here?
India has seen an increase in various data centre types, such as energy-efficient, AI-dedicated, and GCC-driven facilities, as well as expanded operations from mega cloud service providers. These changes are primarily driven by two factors: a shortage of available power in developed countries, leading them to seek infrastructure in developing nations, and the unexpected surge in enterprise AI workloads. The latter, especially those based on transformer architectures, demand significant computational power and rely on energy-intensive GPUs, further influencing data centre growth in India.
What is the biggest challenge ahead for the data centre industry in India?
The biggest challenge ahead is the lack of reliable, clean power and firm energy commitments. Large data centres can demand over 100 MW, enough to power an entire mid-sized town, and without assured power at a reasonable cost, profitability is not possible. Cooling costs are enormous and can erode profitability for decades, while water usage is a serious concern given the reliance on conventional air conditioning. India’s electricity grid requires a fundamental redesign: it must store, transmit, and distribute energy at the scale now demanded by AI and data centres. Current sustainability commitments often fail to deliver real energy savings, and as AI grows, data sovereignty and regulatory compliance are becoming concerns. Energy, grid readiness, and environmental impact are the primary constraints.
Innovations from Ziroh Labs provide an architecture for running AI sustainably on CPUs, using roughly one-third the power without sacrificing speed. More than 100 AI models have been optimised to perform well across a range of domains, including medical, financial, voice, speech-to-text, logic, math, and reasoning. These architectures do not depend on liquid cooling; convection-based cooling is sufficient. Additionally, the fixed-cost advantage can reach up to 70%. Ziroh Labs provides an alternative that conserves precious power and capital and enables AI to be used methodically and appropriately.
Designing for flexibility, not hype
Amit Agrawal
India’s data centre infrastructure is broadly moving in line with global trends around higher densities, AI-ready designs, modular construction, and increased automation. That said, the constraint shaping most decisions today is not land availability but power and cooling. As AI workloads increase, including those driven by national initiatives such as the IndiaAI Mission, capacity planning is becoming power-led. Grid readiness, PPA or captive power arrangements, energy storage, and cooling efficiency now have a direct bearing on uptime, scalability, and long-term cost structures.
Sustainability has also moved from being an intent statement to a design consideration. Metrics such as PUE, water usage effectiveness, and carbon impact are being considered early in the design phase.
At the hyperscale level, the focus has shifted to operational resilience. Our Chennai facility, for example, is built around UPS-backed critical loads, water-cooled chillers with adiabatic cooling towers, renewable power integration, and a PUE of 1.35. These choices are driven by reliability and efficiency rather than density alone.
AI is clearly influencing investment decisions, but there is a risk of overbuilding rigid capacity based on short-term assumptions. The more sustainable approach is a flexible infrastructure that can support a range of densities and expand in phases. At Techno Digital, we design for demand-led growth. Our facilities allow higher-density scaling within the same footprint, ensuring capacity additions remain aligned with actual workload adoption rather than speculative demand.
Utilisation, not announcements, will define success
A S Rajgopal
Is AI compute demand making data centre investments and strategies myopic and one-dimensional? Any worries about AI bubbles and idle infrastructure?
AI compute demand is certainly reshaping how data centre investments are being planned today. However, the real risk does not come from AI itself; it comes from approaching AI infrastructure in a narrow, one-dimensional way. Simply adding GPU capacity does not create value on its own. For that capacity to be meaningfully productive, the entire stack must be engineered for sustained utilisation, including power density, cooling, networking, storage throughput, platform maturity, and commercially viable service models.
Concerns around idle infrastructure typically arise when GPU capacity is built speculatively, without clear visibility into contracted workloads or long-term demand. Where operators are being more disciplined, we are seeing a more balanced, portfolio-based approach combining AI-optimised environments with core enterprise infrastructure that continues to deliver stable, annuity-style revenue.
Over the next 24–36 months, success in this space will be determined less by how much AI capacity is announced and more by how effectively that capacity is utilised. Flexibility, utilisation-driven design, and the ability to productise AI infrastructure into reliable services will matter far more than hype-led capacity creation.