Vishal Dhupar, MD, South Asia, NVIDIA
At NVIDIA’s India AI Impact Summit 2026 news pre-brief, Vishal Dhupar, MD, NVIDIA South Asia, summed up key announcements and reflected on how quickly India is moving from AI adoption to AI-building.
NVIDIA, he said, is hosting a senior delegation led by Executive Vice President Jay Puri to “celebrate our exceptional researchers, startups and developers who are building the nation’s AI infrastructure”.
Across February 16–20 at Bharat Mandapam, New Delhi, the company is hosting almost 15 sessions “covering everything from open models to agentic and physical AI”, alongside “100+ NVIDIA partners” showcasing work on AI infrastructure and open-source models.
The larger point of the pre-brief, Dhupar said, is that India now has a shot at using AI to accelerate outcomes across healthcare, education, industry, and digital public infrastructure. But to do that, the country needs to treat AI less like a product launch and more like a foundational system.
“AI is not just a single product or a lone breakthrough. It’s an entire industrial system,” he said, adding that AI is becoming “essential infrastructure, just as electricity and the Internet were in previous generations”.
The “five-layer cake” that defines the AI stack
To make the point tangible, Dhupar used a simple model: “Think of AI as a five-layer cake.”
He laid out the stack from the ground up:
- Energy
- Chips
- Infrastructure
- Models
- Applications
Each layer, he said, has its own ecosystem, and NVIDIA is “working with India’s technology leaders at every single level of this stack”.
That line-up also set the tone for the rest of the briefing: India’s developer base, the build-out of AI cloud capacity, the push on open models and toolkits such as NVIDIA Nemotron, and the expansion of “physical AI” into manufacturing.
India’s developer engine, startups, and model builders
Dhupar pointed to the scale of the India ecosystem as a key differentiator.
He said there are “roughly 800,000 developers in India” building, training, and deploying AI on NVIDIA’s platform, alongside “over 4,000” companies in NVIDIA’s Inception programme in India. These firms, he said, are applying AI across healthcare, finance, retail, and industry.
He also said “12 dedicated model builders” are working on foundational and domain-specific AI “designed to be scaled across the country and beyond”.
This, he added, is backed by NVIDIA teams in India working “across five campuses” spanning engineering, research, customer support, and developer relations.
Together, he argued, this makes India “not just a major market for AI, but in fact a global centre for AI innovation and impact”.
AI clouds and “AI Factories” built on Blackwell-scale capacity
On the infrastructure layer, the briefing highlighted what Dhupar described as the India AI Compute Builder effort: building AI clouds with systems that include “tens of thousands of NVIDIA GPUs”.
He said NVIDIA is collaborating with cloud providers such as Yotta, L&T, and E2E to deliver “advanced AI factories” intended to support India’s researchers, startups, and enterprises.
Among the announcements he referenced:
- Yotta “augmenting its Shakti Cloud” powered by “over 20,000 NVIDIA Blackwell Ultra GPUs”, positioned as large-scale sovereign AI infrastructure.
- L&T “announcing a new formation to build sovereign AI factory infrastructure”, with a roadmap that includes “expansion in Chennai and [a] new facility in Mumbai”.
- E2E “building an NVIDIA Blackwell cluster” hosted at an L&T data centre.
He also addressed a question on how India’s GPU mix is evolving, noting that India started building clusters on Ampere and Hopper and is now seeing augmentation with Blackwell, with different architectures serving different workload needs.
Open models and Nemotron as a “toolkit”, not just a model
Models, Dhupar said, are the next essential ingredient in the stack.
He positioned NVIDIA’s open models as a springboard for sovereign AI development in India and claimed that in 2025 NVIDIA was a top contributor in open-source models, data, and training recipes, with “even greater focus and momentum” continuing this year.
He also stressed that the open-model push spans domains beyond large language models, including biomedical AI, climate and health systems, agentic AI, robotics, and autonomous systems. The goal, as he put it, is to help developers build systems that reflect “their own language, tradition, and culture”.
He then “double clicked” into NVIDIA Nemotron, describing it as “not a model, it’s an entire toolkit”, combining datasets, libraries, and recipes for building agentic AI.
For India, he highlighted “NVIDIA Nemotron personas”, described as a massive open synthetic dataset designed to reflect India’s diversity. Because it is built on synthetic patterns rather than personal data, he said, it can help create culturally relevant AI while protecting privacy.
He also noted a recent release, “NVIDIA Nemotron 3 Nano”, and said larger “Super” and “Ultra” versions are expected soon.
Indian startups and enterprises building on Nemotron
The briefing called out several India-based AI model builders and application companies using Nemotron tools and libraries.
Among the model builders highlighted:
- Sarvam, BharatGen, and Chariot, described as among the first in India to build frontier language and multimodal models with Nemotron libraries and datasets.
- Sarvam, he said, is open-sourcing its “Sarvam 3 Series” of text and multimodal language model variants.
- BharatGen is releasing mixture-of-experts language models.
- Chariot is announcing an “88 billion parameter text to speech model” optimised for India’s linguistic landscape.
He also referenced tooling such as NVIDIA NeMo Curator (data processing), the NeMo Framework (pretraining), and NeMo reinforcement learning (post-training), and said these models are intended for public-sector and enterprise applications.
On the applications side, he cited:
- Commotion, described as developing an AI operating system for complex enterprise workflows, integrating Nemotron models and speech capabilities.
- Gnani, which he said is achieving a “15X reduction” in the cost of language and voice intelligence, with production systems built using the NeMo framework and Nemotron speech models.
- NPCI, described as customising the Nemotron 3 Nano model for finance-specific tasks to support multilingual customer service across India’s banking ecosystem.
- Zoho, described as advancing its CLM platform and building proprietary models using NVIDIA NeMo on Blackwell and Hopper, integrating them across its software suite.
Global system integrators: India as the delivery engine for enterprise AI
Dhupar argued that India’s systems integrators will remain central to how enterprise AI gets built and scaled globally.
He said India’s tech industry is “on track to generate $350 billion by 2030”, powered by a global delivery model where “roughly 80% of the revenue” comes from international customers.
He cited Infosys, Persistent, Tech Mahindra, and Wipro as using NVIDIA AI Enterprise to build and deploy AI agents across industries including finance, software development, drug discovery, telecom, and customer service.
One example he shared from a US healthcare insurer: an AI-driven voice and agent-assist solution where AI agents handled “42% of the inbound calls”, responding to “900 concurrent calls” and “164 requests per second” with “sub 200 millisecond latency”.
His broader claim: this is “one snapshot” of how India-based GSIs, powered by NVIDIA technologies, are pushing enterprise AI adoption at global scale.
Physical AI and the next phase of industrialisation
The top layer of applications, Dhupar said, is increasingly physical: factories, robots, simulations, and digital twins.
He linked this to India’s investment cycle, saying the country is investing “$134 billion” in manufacturing capacity across construction, automotive, renewable energy, and robotics.
He said NVIDIA, along with Cadence, Siemens, and Synopsys, is teaming with Indian manufacturers to build “AI factories of the future”, described as software-defined facilities designed to integrate AI from initial design to final operation.
He cited examples including:
- Havells using simulation to achieve “six times faster” results for appliances, enabling better energy-efficiency optimisation.
- Addverb Technologies using NVIDIA Omniverse for high-fidelity digital twins to test robots in simulation before deployment.
- TCS deploying physical AI at Tata Motors for quality inspection and real-time safety monitoring.
- Wipro using the Isaac robotics platform to stress-test workflows in a virtual world.
- Tata Consulting Engineers launching “cognitive twin platforms” for players including Torrent Power and Power Grid Corporation of India.
The pitch, he said, is that combining NVIDIA’s stack with India’s manufacturing leadership means “we aren’t just witnessing the next age of industrialisation, we are building it”.
Q&A: Talent, cost of intelligence, and “the paradox of efficiency”
In the Q&A, Dhupar repeatedly returned to a few themes: grassroots research, diversity, energy efficiency, and job augmentation.
On building capability at the roots, he said NVIDIA’s India office works closely with universities and leading institutes, including IITs, and noted that research can translate into companies, citing “the work that we have done at IIT, Chennai and the birth of Cerbone”.
As the Q&A turned to the hard part of scale, Dataquest asked how India can balance rapid AI adoption with ethical frameworks and workforce readiness, given the country’s linguistic and social diversity. Dhupar said India’s diversity is an advantage, and that an “Indian stack” can create population-scale solutions for education, healthcare, and mobility. If India solves these at scale, he said, it can become a reference architecture for the Global South and a “net exporter”.
On the economics of AI, he positioned accelerated computing as the lever to reduce cost and energy use, claiming that moving from Hopper to Blackwell brings “energy efficiencies by 25 times” and “performance by 30 times”, which “brings the cost of intelligence down dramatically”.
And on jobs, he pushed back on the idea that automation simply replaces people. “It’s the change of the task, not the purpose,” he said, citing Jensen Huang’s radiology example: automation frees specialists to focus on complex cases even as demand rises. He called it the “paradox of efficiency”, and argued India will use that paradox to “augment” jobs and create “more good jobs”.
AI is a “stack”, not a product
Dhupar ended where he began: AI is a stack, not a point solution.
“AI is a five-layer cake spanning energy, chips, infrastructure, models and applications,” he said, adding that NVIDIA is working with India’s “visionaries at every single level of this stack”.