Anjani Kumar Kommisetti, Head of Business, Rhine XCircle
Where, and why, do modular and prefab data centres align with the trends, appetite, and challenges shaping the data centre industry today?
Modular and prefabricated data centres are rapidly becoming integral to the evolving digital infrastructure landscape, and the reasons are both compelling and urgent. Today, the industry is under immense pressure to deliver reliable, scalable, and high-performance data infrastructure – not in years, but in months. This shift is driven by a confluence of trends: the expansion of digital services into Tier 2 and Tier 3 cities, the rollout of 5G, the acceleration of Industry 4.0, and the growing footprint of IoT and green energy initiatives.
At Rhine XCircle, we see time-to-market as the new benchmark of value. Businesses can no longer afford to wait 18–24 months to bring a facility online. Whether it’s for hyperscalers, telecom, utilities, or manufacturing, the demand is for plug-and-play, factory-integrated modules—be it power, cooling, or IT infrastructure—that can be deployed in record time with minimal on-site effort.
Is it more than a temporary trend? Why or why not?
The shift from ‘custom build’ to modular build is more than a trend; it’s a response to a new reality where agility, scalability, and reliability must come together under extremely tight timelines. Our prefabricated solutions not only reduce deployment timelines significantly, often to under 12 months for large-scale facilities, but also enable standardised quality, reduced project risk, and easier scalability.
Simply put, modular and prefab data centres meet the moment by enabling speed, resilience, and flexibility – exactly what today’s interconnected, time-sensitive ecosystems demand.
What have been your most interesting pilots or adoption areas so far?
While we’re currently in the early stages of our journey at Rhine XCircle, the response to our modular and prefabricated data centre solutions has been both encouraging and energising. Having recently set up the company and launched our first product, we’ve already engaged with a diverse set of early adopters across industries.
Some of the most promising pilot engagements are with organisations in mining, Tier 2 city data centre projects, and educational and research institutions. Particularly in research and AI/ML domains, we’re seeing strong interest due to the integrated, scalable nature of our solutions. These institutions value the plug-and-play flexibility of our offering, especially for infrastructure that must support evolving computational workloads.
Customers have been quick to see the potential for fast adoption, scalability, and adaptability to different environments, be it remote industrial sites or urban academic campuses. We’re currently in advanced stages of evaluation and customisation, and we anticipate our first set of live deployments going operational in the coming quarters. The traction we’re seeing affirms that modular is not just a trend; it’s fast becoming a strategic necessity.
Data centre-as-a-service and customised compute: how well do they work in terms of what enterprises need, and what do the economics look like?
The concept of Data Centre-as-a-Service (DCaaS) has long been validated by hyperscalers and co-location providers. Many industries—especially BFSI, e-commerce, and IT—have already embraced off-prem hosting and cloud-based models for their flexibility and scalability. However, we’re seeing a notable shift in industries like manufacturing, R&D, education, and IoT-driven operations, where total cloud dependency doesn’t always meet the latency, compliance, or control requirements.
This is where the next evolution of DCaaS is taking shape—modular and prefabricated data centres delivered as-a-service, directly on customer premises. This model gives enterprises the best of both worlds: the control and low-latency performance of an on-prem deployment, combined with the flexibility and cost-effectiveness of a service-based approach.
Could you elaborate on this for a specific vertical?
For example, in manufacturing environments with a high density of IoT devices, data must be processed and responded to in real time. A cloud-only approach can introduce latency and security concerns. Here, deploying a modular edge data centre as a service—fully integrated, quickly installable, and available on a rental or subscription basis—proves highly effective. It eliminates the need for large upfront capital investment and allows businesses to scale, upgrade, or even swap out modules as technology evolves, without disrupting their operations or budgets.
What’s the economic context here?
Economically, this model shifts data centre investments from CapEx to OpEx, offering financial agility. It also supports technology refresh cycles better, enabling enterprises to stay current without heavy reinvestment every few years.
In essence, DCaaS powered by modular infrastructure is a strong fit for modern enterprise needs—it’s scalable, responsive to change, and more economically sustainable than traditional builds. It’s especially attractive for organisations looking to bring compute closer to where data is generated, while retaining the flexibility to evolve as their workloads and technologies do.
Data centres have been getting a lot of criticism for their power-hungry nature, their environmental ripple effects, and their outages. Any insights you can offer here?
This is a very real and ongoing challenge in the data centre industry. As we move into an era of AI/ML-driven computing, the need for high-density servers and GPU-powered infrastructure is increasing dramatically. These systems are inherently more power-intensive—but they also deliver exponentially more compute in far less physical space. In other words, while individual racks may consume more power, the overall computational efficiency per square foot has significantly improved.
However, the broader reality is that data generation and consumption are growing at an unprecedented pace. Even with efficiency gains, the overall power demand continues to rise simply because of the sheer volume of digital content, applications, and devices coming online every day.
So how do we solve this?
When it comes to addressing the environmental impact, a key lever lies in location and energy sourcing. Traditionally, large data centres have been concentrated in urban hubs like Mumbai, Bengaluru, or Chennai—areas with limited access to renewable energy and already-stressed infrastructure. This creates a clear sustainability imbalance: heavy consumption in areas where clean power is not generated.
To address this, the industry is now actively shifting towards deploying modular and edge data centres closer to renewable energy grids—be it wind, solar, or even emerging hydrogen power sources. By placing data centres strategically near the source of clean energy, we can drastically reduce transmission losses, tap into greener power, and relieve pressure on urban energy infrastructure.
Of course, we must accept that some level of power consumption is unavoidable—especially as we continue to digitise every aspect of our lives. But what we can do is optimise, decentralise, and integrate with green power ecosystems to ensure we are progressing responsibly.
Nuclear infrastructure and SMRs, wind-powered data centres, and new AI hardware: what does their evolution look like?
While these technologies have been in discussion for quite some time, there’s still a lot of debate around their feasibility and actual readiness for large-scale adoption. Each of them comes with its own set of challenges – technical, regulatory, and operational.
The closest to adoption right now are solar and wind energy solutions, which we see gaining traction in the short term. These are more mature, scalable, and easier to integrate into current data centre models, especially when you’re moving infrastructure out of Tier 1 cities into Tier 2 and Tier 3 locations where green power grids are more accessible.
Nuclear infrastructure and SMRs (Small Modular Reactors), on the other hand, will take more time. It’s not just about whether the technology exists – it’s about whether the ecosystem around it is ready. This includes the availability of standardised products, safety protocols, regulatory approvals, and the capability to manage and maintain such systems. These technologies will likely first be tested in non-critical applications like mechanical loads before being considered for data centres, which require high reliability and uptime.
As data centres gradually move away from crowded urban centres to more remote locations, the case for localised, green, and even nuclear power sources becomes stronger. But this shift won’t happen overnight. There are serious considerations around safety, transportation, and long-term risk that still need to be addressed.
Is AI a bubble or a revolutionary opportunity for the data centre market in the long term?
What we’re witnessing today under the umbrella of AI is, in many cases, advanced automation—impressive, no doubt, but still a step away from what true AI promises. The real AI revolution will unfold when machines move from simply responding to commands to making proactive, context-aware decisions. That leap requires one thing above all: vast, diverse, high-quality data—and we’re still building toward that.
To bridge this gap, there’s an aggressive buildout of digital infrastructure underway. IoT devices are being embedded across industries to gather the volume and variety of data needed to train intelligent systems. Just like human intelligence is shaped by years of exposure, experience, and learning from mistakes, machines need extensive datasets across scenarios before they can truly ‘think’. Unlike humans, however, machines can’t afford to make those mistakes—there are legal liabilities, reputational risks, and operational stakes involved. Until that data maturity is reached, AI will largely remain task-specific and narrow in scope.
So, is AI a bubble? Not at all. But we are still at the early stages of a long, demanding transformation. And the most significant progress right now is being made in laying down the digital foundations—scalable, resilient data centres built to handle the compute-intensive needs of AI.
Your take on megawatt-scale computing, heavy cooling needs, and the risks of over-building?
All of the above factors naturally lead to a surge in megawatt-scale data centres, high-density compute racks, and advanced cooling infrastructure. These aren’t overreactions—they’re responses to real and rising demand. AI workloads, especially model training and inference at scale, consume massive amounts of power and generate significant heat. That’s why we’re seeing a shift to more efficient designs and high-density infrastructure. It’s not over-building; it’s strategic futureproofing.
That said, we must be mindful of scale and relevance. Not every application or geography needs a hyperscale setup. Edge and regional data centres will be vital for real-time processing, localised data collection, and faster response times, especially in industrial and remote settings. This hybrid model—centralised powerhouses supported by edge deployments—is the way forward to build an intelligent, sustainable AI ecosystem.
If the innovation curve can elevate model efficiency, and if the next AI chapter is all about inferencing, then is it time for latency- and proximity-oriented, decentralised and edge data centres to shine? Will the wild spending on central data centres continue?
Absolutely. As AI moves from development to real-world deployment, inferencing is becoming the dominant workload—and that requires data centres to be closer to where data is generated and consumed. Latency, reliability, and contextual responsiveness are key, which makes edge and regional data centres more relevant than ever.
We’re seeing a layered evolution: centralised data centres are scaling up from megawatts to gigawatts, regional centres operate in the megawatt bracket, and edge sites in kilowatts. But while the power and density may be centralised, the intelligence starts at the edge. Real-world data, user inputs, IoT device signals—all originate outside the core, and this decentralised flow will be critical for AI to move from reactive to truly proactive systems.
What about AI capex rollbacks seen from hyperscalers like Microsoft recently?
These are not signs of retreat. They are strategic recalibrations. With the AI landscape evolving so rapidly, organisations must continuously re-evaluate where and how to invest—whether that’s in training infrastructure, inferencing capability, or geographical distribution. It’s less about scaling back and more about shifting focus to meet emerging priorities.
Meanwhile, centralised data centres will continue to grow, especially for compute-intensive training and storage. But the rise of inferencing will drive major investments into edge and regional facilities, enabling faster, localised decision-making and better user experiences. In summary, AI is entering a new phase—where scale meets proximity. Central and edge data centres will grow in parallel, complementing each other. The edge isn’t just gaining relevance—it’s becoming indispensable in the AI value chain.
pratimah@cybermedia.co.in