ESDS Software Solution launched its GPU as a Service platform at the company’s 20th Annual Day celebration, held at Sula Vineyards in Nashik. The announcement, made by Promoter, Managing Director and Chairman Piyush Prakashchandra Somani, marks the company's entry into sovereign-grade GPU compute environments aimed at AI and ML workloads.
The launch brings ESDS into the global league of managed GPU service providers at a time when the demand for deterministic, high-throughput infrastructure is surging—driven by the exponential rise in Generative AI, Large Language Models (LLMs), and simulation workloads.
“We are democratising access to large-scale GPU clusters,” said Somani. “Organisations have struggled with the cost and ambiguity of AI infrastructure. Our SuperPODs remove the guesswork, offering transparency and consistent performance.”
Built for performance, optimised for clarity
At the core of this offering are next-generation GPU SuperPODs powered by hardware from NVIDIA (DGX, HGX B200/B300, GB200, and NVL72) and AMD (MI300X). These clusters are purpose-built for enterprises needing scalable environments to run training, inference, and simulation workloads across billions of parameters.
Notably, ESDS has introduced the *SuperPOD Configurator*, a browser-based tool that allows organisations to custom-design their AI infrastructure. By selecting GPU models, compute densities, and storage options, users get real-time visibility into performance projections and cost estimates.
This self-service tool aims to remove procurement opacity, historically a roadblock for teams trying to plan or justify large AI investments.
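To make the idea concrete, here is a minimal, hypothetical sketch of the kind of calculation a configurator of this sort might perform when turning user selections into a cost estimate. The GPU labels, hourly rates, storage pricing, and function names below are illustrative assumptions for this sketch only; they do not represent ESDS's actual SuperPOD Configurator logic or pricing.

```python
# Illustrative-only sketch: rough monthly cost from cluster selections.
# All rates and model labels are hypothetical placeholders, not ESDS pricing.

from dataclasses import dataclass

# Hypothetical hourly rates (USD) per GPU, for illustration only.
ASSUMED_HOURLY_RATE_USD = {
    "HGX-B200": 6.00,
    "MI300X": 4.50,
}

@dataclass
class ClusterSelection:
    gpu_model: str              # e.g. "MI300X" (placeholder label)
    gpu_count: int              # number of GPUs in the cluster
    storage_tb: int             # attached storage in terabytes
    hours_per_month: int = 730  # always-on by default

def estimate_monthly_cost(sel: ClusterSelection,
                          storage_usd_per_tb: float = 20.0) -> float:
    """Return a rough monthly cost estimate for the selected cluster."""
    gpu_rate = ASSUMED_HOURLY_RATE_USD[sel.gpu_model]
    compute_cost = gpu_rate * sel.gpu_count * sel.hours_per_month
    storage_cost = storage_usd_per_tb * sel.storage_tb
    return compute_cost + storage_cost

if __name__ == "__main__":
    selection = ClusterSelection(gpu_model="MI300X", gpu_count=8, storage_tb=50)
    print(f"Estimated monthly cost: ${estimate_monthly_cost(selection):,.2f}")
```

The point of such a tool, as the article notes, is that teams can see projections like this up front instead of waiting on opaque procurement quotes.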
Performance proof, not just promise
The company shared a compelling use case from a research lab previously crippled by disjointed infrastructure. Training a 50-billion-parameter model once took 40 days and incurred inflated operational costs. After moving to ESDS’s NVL72-based managed GPU stack with optimised containers and NVLink bandwidth, the lab slashed training time to 10 days, cut costs by 60%, and achieved 30× faster inference speeds.
This anecdote isn't isolated; it reflects the broader market need for infrastructure that can keep pace with the scale and speed of modern AI development.
India-built, globally aligned
Though the infrastructure meets global AI performance benchmarks, ESDS has designed, built, and optimised the systems entirely in India. Serving over 1,300 clients across BFSI, government, and enterprise verticals, the company now offers a full-spectrum GPU services suite, from captive design consulting to managed deployment and hybrid GPU+CPU cloud configurations.
The company also offers 24×7 monitoring, AI/ML ops services, and flexible consumption models, all of which are crucial for organisations that lack in-house infrastructure teams or GPU expertise.
Market context and regulatory note
With global spend on AI-optimised infrastructure projected to reach USD 329.5 billion by 2026, and nearly 80% of AI investments expected to flow toward GPU-based systems by 2030, the launch positions ESDS to ride the crest of the AI infrastructure wave.