Pankaj Malik, CEO and Whole-time Director, Invenia
As the global digital market shifts its focus from general-purpose computing to high-density, GPU-intensive workloads, the physical infrastructure supporting this change is undergoing a fundamental redesign.
This transition represents more than just a hardware upgrade; it is a complete re-engineering of the data centre lifecycle. From managing the “power crunch” with behind-the-meter microgrids to solving the operational complexities of liquid cooling, Pankaj Malik, CEO and Whole-time Director of Invenia, outlines how the company is positioning itself as a strategic partner for enterprises navigating the high-stakes journey from centralised facilities to a core-to-edge, AI-driven architecture.
How is the organisation evolving its service portfolio to address the shift from traditional enterprise data centres to high-density, AI-driven architectures?
Invenia’s service portfolio is rooted in deep expertise across enterprise and on-premises data centres, while being purpose-built to support the shift towards distributed, AI-ready digital infrastructure. As AI and real-time workloads scale, compute is increasingly moving beyond centralised facilities to a core-to-edge architecture that demands low latency, high availability, and predictable performance.
Our portfolio reflects this evolution by integrating scalable data centre design, high-capacity fibre and network infrastructure, and automation-led operations to support high-density, GPU-intensive environments. In parallel, Invenia continues to expand its managed services, cybersecurity, and digital infrastructure capabilities to address end-to-end lifecycle requirements across data centres, networks, cloud, and edge environments.
Through SLA-driven managed infrastructure, 24x7 monitoring and operations, and adaptive cybersecurity, we enable customers to deploy AI and data-driven workloads closer to the edge without compromising governance, resilience, or long-term scalability.
With grid congestion becoming a primary inhibitor to data centre growth globally, what is your perspective on the rise of ‘behind-the-meter’ power solutions?
Behind-the-meter power solutions are rapidly evolving from a contingency measure into a strategic pillar of data centre infrastructure worldwide. As grid constraints intensify across major markets, hyperscalers and large data centre operators are increasingly deploying on-site generation, energy storage, and microgrids to reduce exposure to grid volatility and connection delays.
Globally, this has translated into greater adoption of on-site solar, wind, and battery systems, often deployed in hybrid configurations that allow facilities to dynamically manage grid draw during peak demand or stress events. In markets such as Europe and parts of North America, this trend is being reinforced by grid operators introducing flexible connection agreements and time-of-use models designed to maximise existing grid capacity rather than relying solely on long-cycle transmission upgrades.
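The dispatch logic behind such hybrid configurations can be sketched in a few lines. The function below is an illustrative simplification (all names, capacities, and thresholds are hypothetical, not Invenia's actual control logic): it decides how much power to draw from the grid versus the battery, capping grid draw during peak or stress events.

```python
# Hypothetical behind-the-meter dispatch sketch: split residual demand
# between grid and battery, limiting grid draw during peak events.

def dispatch(load_kw, onsite_kw, soc_kwh, peak,
             batt_max_kw=500.0, peak_grid_cap_kw=1000.0):
    """Return (grid_kw, batt_kw); batt_kw > 0 means the battery discharges.

    soc_kwh is treated as the available discharge energy for one
    dispatch interval, a simplifying assumption for illustration.
    """
    net_kw = load_kw - onsite_kw  # residual demand after on-site generation
    if net_kw <= 0:
        # Surplus on-site power: charge the battery, draw nothing from grid.
        return 0.0, max(net_kw, -batt_max_kw)
    if peak:
        # During grid stress, discharge the battery to hold grid draw
        # under the flexible-connection cap.
        batt_kw = min(max(net_kw - peak_grid_cap_kw, 0.0),
                      batt_max_kw, soc_kwh)
        return net_kw - batt_kw, batt_kw
    return net_kw, 0.0  # off-peak: serve residual load from the grid
```

In practice, a real energy-management system layers forecasting, tariff optimisation, and inverter constraints on top of this kind of rule, but the core trade-off, shifting draw away from stressed grid windows, is the same.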
On-site generation and storage allow data centres to operate effectively within these evolving grid frameworks, maintaining predictable performance, improving resilience, and preserving the flexibility to scale as compute, AI, and high-density workloads continue to grow.
Beyond the hardware itself, what are the most significant operational challenges that must be solved to make liquid cooling truly mainstream?
The transition to liquid cooling is less constrained by technology and more defined by operational maturity at scale. The real challenges sit across three areas: systems integration, skills, and lifecycle management.
First, coolant management and reliability demand a fundamentally different operating discipline. Liquid cooling introduces new dependencies around fluid quality, leak detection, redundancy design, and long-term materials compatibility. These systems must be engineered with carrier-grade resiliency.
Second, the operational challenge is not just skills, but control. Liquid-assisted and direct-to-chip cooling significantly increase system interdependencies between IT load, cooling loops, and power infrastructure. As densities rise, maintaining stability depends on deeper instrumentation, real-time telemetry, and automated control systems that can dynamically balance thermal performance, energy efficiency, and risk, reducing reliance on manual intervention.
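To make the idea of automated, telemetry-driven control concrete, here is a minimal sketch (hypothetical parameters and setpoints, not a production algorithm): a proportional controller that maps coolant return temperature to pump speed, with a leak-detection interlock that overrides normal control rather than relying on manual intervention.

```python
# Illustrative liquid-cooling control sketch: proportional pump-speed
# control from return-temperature telemetry, with a leak interlock.

def pump_speed_pct(return_temp_c, leak_detected,
                   target_c=45.0, base_pct=50.0, gain_pct_per_c=5.0):
    """Return pump speed as a percentage of maximum."""
    if leak_detected:
        # Interlock: isolate the loop instead of continuing to pump coolant.
        return 0.0
    error_c = return_temp_c - target_c
    pct = base_pct + gain_pct_per_c * error_c
    # Clamp to a safe operating range so the loop never fully stalls
    # or overdrives during normal operation.
    return max(30.0, min(100.0, pct))
```

Real facilities-control systems add integral/derivative terms, redundancy voting across sensors, and coordination with power and IT load, but this captures the shift the answer describes: closed-loop control acting on telemetry, with safety behaviour encoded rather than left to operators.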
Third, retrofitting legacy facilities remains complex. Many existing sites were not designed for high rack densities, plumbing distribution, or heat-reuse architectures. This is where modular, hybrid cooling designs become critical. They allow operators to incrementally evolve from air-cooled environments to liquid-assisted and direct-to-chip solutions without disrupting live operations or stranding capital.
At Invenia, we address these challenges through a unified, hybrid cooling strategy. Air cooling continues to serve standard deployments reliably, while liquid-assisted and direct-to-chip solutions are integrated where workload density demands it. This approach preserves operational stability today while ensuring our infrastructure is ready for the next generation of AI-driven compute.
punams@cybermedia.co.in