Agentification is not just about productivity: it's about creating new business models

Cognizant’s Annadurai Elango explains how enterprise AI is shifting from productivity to agent-driven transformation, highlighting infrastructure, security, CoEs, and the importance of long-term cost economics for scaling AI.

Punam Singh
Annadurai Elango, President – Core Technologies and Insights at Cognizant


As enterprises move from experimenting with AI to embedding it into core operations, the conversation is shifting from pilot programs to production-scale infrastructure, governance, and economics. Partnerships between global system integrators and hyperscalers are becoming critical in translating rapid advances in foundation models into tangible business outcomes. Against this backdrop, organisations are rethinking how platforms, data architectures, and security frameworks must evolve to support agentic workflows, while also addressing cost, sovereignty, and legacy modernisation challenges.


In this interview, Annadurai Elango, President – Core Technologies and Insights at Cognizant, outlines how the company is operationalising enterprise AI through its expanded collaboration with Google Cloud. He discusses the shift from productivity-led automation to agent-driven business transformation, the role of GPU-centric infrastructure and open platforms, and why long-term economics, trust frameworks, and reusable IP built from global Centers of Excellence are becoming central to scaling AI responsibly.

Cognizant recently announced an expansion with Google Cloud to operationalise AI. How does this shift from assisting to collaborating change the core infrastructure requirement for your clients?

We have a very long-standing partnership with Google. We are one of Google's largest partners in taking its technologies to customers, and we also use many Google technologies internally. What we recently announced was our co-solutioning, co-engineering, and co-implementation services around Gemini Enterprise.


This is primarily about taking AI models and agent capabilities to end customers in a democratised way, making them available to enterprise users not only in the backend but also in the frontend. That is why Gemini Enterprise is a significant step. We also partner with Google on the user-facing Workspace suite, and Google is now bundling Workspace and Gemini Enterprise together for customers.

We intend to integrate Gemini Enterprise with all our Neuro series platforms. When we take a platform-based solution to customers, we will be able to include Gemini Enterprise as part of that offering. This makes it easier for customers to build their enterprise agentification journey.

Google has already built the underlying middle layer, data layer, and infrastructure, which means customers do not need to invest in building the entire stack. They can directly focus on the top layer—reimagining business processes and applying Gemini Enterprise.

For us, this is about speed to value. We can integrate solutions quickly with our platforms, so customers do not have to spend time building, standardising, and auditing infrastructure. That acceleration is the key benefit.

You spoke about the mirage of digital sovereignty and execution hurdles in scaling AI infrastructure. For Indian enterprises with legacy systems, how does your platform bridge the gap?

Legacy infrastructure has a different connotation today. Earlier, the shift was from mainframes to open systems and the cloud. Now, every enterprise is undergoing a major transformation regardless of whether they were already modernised or still on older technologies.

The shift toward GPU infrastructure is central because a large portion of AI workloads and models run there. At the same time, enterprises still depend on ERP and SaaS systems, and the challenge is co-locating data with where the models run. Data gravity and latency become critical because moving large volumes of data to the AI is often harder than bringing the AI to the data.

Organisations now have multiple architectural choices. They can run workloads on hyperscaler clouds where AI infrastructure is part of the stack, deploy GPU environments alongside hyperscalers, or bring GPU infrastructure inside their own data centres for security and regulatory reasons.

Our platforms are agnostic to where data or workloads reside. They operate on open API models, enabling us to encapsulate systems, build wrappers, and access data regardless of infrastructure. Applications are becoming less relevant, while data access is becoming the most critical factor. Building wrappers around applications is relatively straightforward compared to ensuring data accessibility.
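The wrapper idea described above can be sketched in a few lines. Everything here is an illustrative assumption, not a Cognizant or Neuro API: the point is that once each legacy or SaaS system is encapsulated behind a common interface, an agent can query data without caring where it resides.

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Uniform interface over heterogeneous backends (hypothetical sketch)."""

    @abstractmethod
    def fetch(self, query: str) -> list[dict]:
        ...


class LegacyERPWrapper(DataSource):
    """Wraps a legacy ERP behind the common interface."""

    def __init__(self, records: list[dict]):
        self._records = records  # stand-in for a real ERP connection

    def fetch(self, query: str) -> list[dict]:
        # In practice this would translate `query` into ERP-specific calls.
        return [r for r in self._records if query.lower() in r["name"].lower()]


class SaaSWrapper(DataSource):
    """Wraps a SaaS REST API behind the same interface."""

    def __init__(self, records: list[dict]):
        self._records = records  # stand-in for an HTTP client

    def fetch(self, query: str) -> list[dict]:
        return [r for r in self._records if query.lower() in r["name"].lower()]


def gather(sources: list[DataSource], query: str) -> list[dict]:
    """An agent queries all systems without knowing where each one runs."""
    results: list[dict] = []
    for src in sources:
        results.extend(src.fetch(query))
    return results
```

The design choice this illustrates is the one in the answer: the application behind each wrapper matters less than the fact that its data is uniformly accessible.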

With agentic productivity becoming a major theme, how do you see the role of humans evolving?

The evolution over the last three years has been rapid. Initially, AI discussions focused on productivity, especially IT productivity. Over the past year, the conversation has shifted toward reimagining and innovating business processes.

Productivity remains embedded in enterprise operations and will continue to be a priority. Partnerships and platforms help drive this, but the focus is expanding beyond efficiency to value creation.

Agentification enables organisations to redesign workflows and create new business models. For example, in healthcare, instead of only automating administrative IT systems, agentified processes can handle complex roles such as pre-authorisation workflows.

This shift is about speed and value, not just productivity. It changes how organisations deliver outcomes and build services.

Tell us about the new Centre of Excellence in India and its role in global product strategy.

Centres of Excellence are core to building capabilities and capturing insights from customer engagements. A significant portion of our workforce is based in India, and these centres track emerging technologies, upskill talent, and build engineering solutions.

Their role has expanded beyond skills to creating platforms and intellectual property. We centralise learning from global engagements and embed those learnings into platforms, ensuring new customers benefit from reusable assets.

We are doubling down on CoEs because they now build both capabilities and IP. For example, with Gemini Enterprise, CoEs build competencies, reusable agents, and industry solutions, which are stored in a central repository for future use.

From a cybersecurity perspective, how is your platform evolving to address agent-to-agent security?

We approach this in two ways: security for AI and AI for security. We continue delivering traditional security services such as identity management, vulnerability management, and operations, while also embedding AI into security delivery.

As a full-stack AI player, we focus on securing the entire lifecycle—from data to models to agents. Governance, privacy, bias management, and responsible AI are key considerations.

We developed the Neuro AI Trust framework with tools and processes to enforce guardrails. We also collaborate with ecosystem partners to integrate capabilities rather than building every component ourselves.

Security must be embedded from the foundation. Guardrails cannot be added after deployment. Tools and frameworks help manage agent orchestration, safety, and lifecycle governance to ensure secure scaling of AI systems.
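One way to read "guardrails from the foundation" is that every agent action passes a policy check before it executes, rather than being filtered after the fact. The sketch below is a minimal illustration under that assumption; the policy rules and agent shape are hypothetical, not the Neuro AI Trust framework itself.

```python
from dataclasses import dataclass, field


@dataclass
class Guardrail:
    """Policy evaluated before any agent action runs (illustrative)."""
    allowed_actions: set[str]
    blocked_terms: set[str] = field(default_factory=set)

    def check(self, action: str, payload: str) -> bool:
        if action not in self.allowed_actions:
            return False
        return not any(term in payload.lower() for term in self.blocked_terms)


def run_action(guardrail: Guardrail, action: str, payload: str) -> str:
    # The guardrail gates execution up front, not after deployment.
    if not guardrail.check(action, payload):
        return "blocked"
    return f"executed {action}"


# Hypothetical policy: the agent may only look up and summarise,
# and payloads containing sensitive identifiers are refused.
policy = Guardrail(allowed_actions={"lookup", "summarise"},
                   blocked_terms={"ssn"})
```

In agent-to-agent settings the same gate would sit on every hop, so a compromised or misbehaving agent cannot widen its own permissions downstream.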

Enterprises often struggle with the operational expenditure of AI. How do you view the economics?

There is a cost to agentification and an ongoing cost to run AI. Discussions often focus on implementation rather than operational economics.

If the cost of running AI exceeds the cost of the human workforce it replaces, the value proposition becomes weak. That is why our advisory approach emphasises long-term economics, including infrastructure choices and cost-per-token considerations.
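The break-even logic here can be made concrete with back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not Cognizant or Google pricing: the point is that token costs scale with volume while workforce costs scale with headcount, so the comparison must be run per workflow.

```python
def monthly_ai_cost(tokens_per_task: int, tasks_per_month: int,
                    cost_per_million_tokens: float,
                    infra_fixed_cost: float) -> float:
    """Operational cost of running an agentic workflow for one month."""
    token_cost = (tokens_per_task * tasks_per_month / 1_000_000
                  * cost_per_million_tokens)
    return token_cost + infra_fixed_cost


# Illustrative assumptions: 50k tokens per pre-authorisation case,
# 20,000 cases/month, $5 per million tokens, $2,000/month fixed infra.
ai_cost = monthly_ai_cost(50_000, 20_000, 5.0, 2_000)

# If the same workload would need 10 analysts at $4,000/month each:
human_cost = 10 * 4_000

print(f"AI: ${ai_cost:,.0f}/month vs human: ${human_cost:,.0f}/month")
```

Under these assumptions the agentic workflow is viable, but doubling token usage per case or switching to a pricier model can erase the margin, which is why cost-per-token belongs in the upfront architecture decision.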

The conversation is shifting from simply deploying AI to ensuring it remains economically viable at scale. Commercial modelling and cost optimisation are becoming critical parts of an AI strategy.