India’s AI discourse is entering a decisive phase, shifting from exploratory conversations around generative tools to a more structural push toward sovereign infrastructure, agentic systems, and decision-centric intelligence. With initiatives such as national compute capacity, public-private collaboration, and a growing focus on inclusive innovation, the country is attempting to translate ambition into deployable ecosystems. Against this backdrop, industry leaders are increasingly being called upon to articulate how global experience can intersect with India’s unique scale, diversity, and policy momentum.
In this conversation, Babak Hodjat, Chief AI Officer at Cognizant, shares a detailed perspective on the transition toward multi-agent enterprise architectures, the strategic importance of sovereign compute, and the role of distributed and multilingual AI in a country like India. Drawing on his work across agentic systems, evolutionary algorithms, and responsible AI initiatives, he outlines why infrastructure, design principles, and global collaboration will be critical to turning India’s AI aspirations into measurable impact.
With the shift from conversational AI to decision-centric AI, how will enterprise user experience evolve?
It will fundamentally change because the unit of software used to be modules or apps, and now it is going to be agents working together. That goes beyond a chatbot: you have agents collaborating and running semi-autonomously across your enterprise.
We believe that is the future. Driving it requires processing capacity, so we welcome the fact that India is investing in it. When I was here in November, someone asked what the single biggest thing India could do to get ahead was, and I said that the lack of publicly, or at least academically, available processing capacity is something everybody complains about. Hopefully, on the path to creating your sovereign AI, that is something you can invest in and open up to universities, students, and smaller companies.
Can distributed AI help India build inclusive and localised solutions despite fragmented datasets?
One of the things agentic systems do is allow for a diversity of data sources: you can have an agent responsible for one data source talking to agents responsible for others. Your interface into the system becomes a consolidation of the information and decisions that come from these disparate sources.
It is the first time we can actually map intent onto disparate sources of data and applications. That kind of design can work well in a country like India, with its diversity of data.
That does not happen without good design. We need to remember that this is not going to happen organically. We have to follow an agentic design principle to build a system like that.
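The agent-per-source pattern described above can be sketched in a few lines. This is a minimal illustration with hypothetical class names, and an in-memory dict standing in for each data source; it is not Cognizant's actual architecture:

```python
from dataclasses import dataclass, field

@dataclass
class SourceAgent:
    """Agent responsible for a single data source."""
    name: str
    records: dict  # hypothetical in-memory stand-in for a real data source

    def answer(self, key: str):
        # Each agent only knows its own source.
        return self.records.get(key)

@dataclass
class Coordinator:
    """Consolidates answers from disparate source agents."""
    agents: list = field(default_factory=list)

    def query(self, key: str) -> dict:
        # Map one intent (here just a key) onto every source, merge the results.
        results = {a.name: a.answer(key) for a in self.agents}
        return {k: v for k, v in results.items() if v is not None}

# Usage: two fragmented datasets, one consolidated interface.
sales = SourceAgent("sales", {"acme": 120})
support = SourceAgent("support", {"acme": "3 open tickets"})
hub = Coordinator([sales, support])
print(hub.query("acme"))  # {'sales': 120, 'support': '3 open tickets'}
```

The point of the sketch is the design principle: no agent sees another agent's data directly; only the coordinator consolidates.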
How can foundational AI resources be democratised for startups and MSMEs?
We do this through partnerships, especially with academia. When we bring an intern on board, part of the appeal of joining Cognizant is access to processing capacity that we have in our own data centres or through partnerships with companies like NVIDIA.
That will not satisfy the actual need in a country of the size and scope of India. This will need government backing and industry-government collaboration in building these systems.
We have a foundry model that spans everything from hosting the actual processing capacity all the way up to multi-agentic systems, and it is fully open source. It fits the sovereign AI model and the chakra the summit outlines very well. We are willing to help the Indian government build this, and we can put in our bit to make it happen.
How can India move from conversations to demonstrable impact in the AI ecosystem?
This is a very good starting point. The summit itself is a very good first step: it is attracting major players in AI, infrastructure, and cloud, not just from India but from around the world. It is the germination of an ecosystem that, with support from the government, can fulfil the requirements you are looking at.
It takes investment. It takes not just processing capacity but also a sandbox ecosystem in which different players can install and deploy AI and agentic systems safely. That is also needed.
It is welcome that the Indian government is behind this because it is only that consolidated will and investment that will bring this together and help the country get ahead.
How is Cognizant collaborating globally to foster talent and research?
Take AI for Good, which we are discussing here. Every time I talk, I put up a QR code and say: volunteer. We are volunteering, so why not join the fun? We get a lot of volunteers coming in.
That project is run in conjunction with the United Nations' ITU; we are not doing this in isolation. We work with standards bodies like the ITU on things such as multi-agent communication and trust standards, and with the World Economic Forum on many of these initiatives. Those are conduits into building intergovernmental AI for Good projects.
We are investing in our AI for Good. There will be some interesting news around it in 2026. Hopefully, there will be more on that.
How can genetic algorithms address India-specific challenges?
Genetic algorithms and population-based evolutionary computation are a long-neglected but very powerful area of AI. We have been writing papers on how bringing that together with large language models can make a huge difference and address some of the major problems that large language models have.
Large language models primarily rely on gradient-descent-style algorithms, which are a form of hill climbing, so they have some fundamental limitations.
Population-based approaches like genetic algorithms are very good at non-linear optimisation, especially when you are looking at multiple outcomes at the same time. Pretty much every problem we look at is multi-objective: improving revenue while reducing costs, or curing disease while limiting the impact on the economy. We are always looking at more than one outcome.
These algorithms are well suited to problems like optimising power grids or managing urban traffic systems. One approach we have used in the past builds models on historical time-series data and then uses evolutionary computation and genetic algorithms to come up with decisions.
This is called prescriptive analytics: you try out various actions and decisions against a model of your time-series data. It is a very effective approach.
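The prescriptive-analytics loop described above, evolving candidate decisions against a model built from historical data, can be sketched as a toy genetic algorithm. The surrogate model here is an invented pricing example and the two objectives (raise revenue, cut cost) are collapsed into a single fitness score for brevity; a real system would use a learned model and proper multi-objective selection:

```python
import random

random.seed(0)

# Hypothetical surrogate model, standing in for one fitted on historical
# time-series data: predicts (revenue, cost) for a candidate price.
def surrogate(price: float):
    units = max(0.0, 100 - price)      # demand falls as price rises
    return price * units, 2.0 * units  # (revenue, cost)

def fitness(price: float) -> float:
    # Scalarise the two objectives: maximise revenue, minimise cost.
    revenue, cost = surrogate(price)
    return revenue - cost

def evolve(pop_size=30, generations=40):
    pop = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # selection: keep top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # crossover: average parents
            child += random.gauss(0, 2)          # mutation: small perturbation
            children.append(min(100.0, max(0.0, child)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 1))  # converges near the analytic optimum, price = 51
```

A production system would keep the objectives separate (for example, Pareto-based selection) rather than collapsing them into one score, which is exactly where population-based methods shine.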
What is your perspective on India's AI infrastructure investments? Should the government focus on the application of AI or the development of AI?
I do not think it is either-or. A lot of applications will require that kind of in-house processing capacity. I welcome this; it is something that is missing.
In the US, you talk to academia, and everyone is complaining about the lack of access to processing capacity. It is welcome that India is making this kind of investment. It will bear fruit.
Building applications makes sense, but India has this DNA of being technology-first. Technology is a career track that is very appealing to many in India. With the IITs and strong companies, you will thrive in the application area as well.
Should nations build foundational models or localised layers?
While it might make more sense to do the second, experience in China has shown that a localised model can bring something to the table that more generic models, without access to the same diversity of data, cannot.
It is not an either-or. Building a large language model on localised data does bring something. If you look at DeepSeek and how, sometimes in its chain-of-thought reasoning, it uses both English and Chinese, it is fascinating. I believe that multilinguality at the core is helping it with reasoning steps.
India is sitting on massive cultural and linguistic diversity, and building an LLM on that will shine an interesting light on how these systems operate when trained on data that is not only English.
How do you ensure audit consistency across multiple languages?
You tell me which one is safer to do: use a large language model trained only on English that does translation from local dialects, or a language model natively trained on many languages. It is the latter that is going to have more consistency. We cannot rule out mistakes, but it will be more consistent.
Different languages are a reflection of cultural, emotional, value-based, and other human abstractions that might not align fully with those of other languages. The enrichment of the abstraction layer of these large language models is going to help them.
The reason a large language model is powerful is that it is abstracting the world similarly to how humans abstract the world. Humans do not all abstract the world in the same way. If you have a diversity of abstractions captured, you have a richer and more powerful system.
What is Green AI, and how can it reduce energy consumption?
At Cognizant, we have built a measure of energy consumption, cost, and time into our multi-agent accelerator. When you use it, it has a running tally of the energy impact. It even looks at the source model and whether the energy source is a fossil fuel or another source.
There are two opposing forces when it comes to AI. Scaling laws mean that building bigger is more powerful, and building bigger typically means using more energy. Many companies are looking at green sources for that additional consumption.
On the other hand, companies are optimising models to be smaller and less energy-hungry. For multi-agent systems, smaller models can be more cost-effective and greener.
We ran a million-agent system on a distilled small model and still got zero errors. Running it on a larger model would have required fewer agents, but the energy impact would be much larger.
Often, using smaller agents makes more sense from an energy perspective than using fewer larger agents.
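The trade-off above, many calls to a small model versus fewer calls to a large one, can be made concrete with a running energy tally. The per-call energy figures below are invented for illustration; they are not measurements of any real model or of Cognizant's accelerator:

```python
# Assumed per-call energy in watt-hours; illustrative numbers only.
ENERGY_PER_CALL_WH = {"small-distilled": 0.05, "large": 1.5}

class EnergyTally:
    """Running tally of energy used by model calls, a rough proxy for
    the kind of accounting described above."""
    def __init__(self):
        self.total_wh = 0.0

    def record(self, model: str, calls: int):
        self.total_wh += ENERGY_PER_CALL_WH[model] * calls

# Scenario A: many agents on a small distilled model.
many_small = EnergyTally()
many_small.record("small-distilled", 1000)   # 1,000 small-agent calls

# Scenario B: fewer agents on a large model.
few_large = EnergyTally()
few_large.record("large", 100)               # 100 large-model calls

print(round(many_small.total_wh, 1), round(few_large.total_wh, 1))  # 50.0 150.0
```

Under these assumed figures, ten times as many small-model calls still use a third of the energy, which is the shape of the argument for smaller agents.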
Will AI help achieve net-zero goals?
I hope so. We are scratching the surface of using AI. The more people use it, the more they will want to use it.
If usage remained constant, we would hit net zero because models become more efficient and energy becomes greener. The reality is that as cost goes down, usage increases.
It is a tough problem. I do not know where this will lead.