Vara Kumar Namburu, Co-founder & Head of R&D and Solutions, Whatfix
Whatfix has unveiled three role-aware AI agents: Authoring, Insights, and Guidance, designed to streamline enterprise workflows and boost efficiency. In this exclusive conversation with Vara Kumar Namburu, Co-founder & Head of R&D and Solutions, Whatfix, we explore the pain points these agents address, the technology powering them, and the company’s vision of achieving “zero-click outcomes” for enterprise users.
Can you start by briefly telling me about the announcement?
We are a company focused on helping enterprises improve user efficiency. Today, we have around 85 Fortune 500 customers and serve organisations across every major vertical—technology, BFSI, pharmaceuticals, manufacturing, and logistics. Some of the largest global enterprises use our solutions.
The broad theme of our products is to help users work more efficiently with technology. Currently, we offer three core products:
Digital Adoption Platform (DAP): Helps users adapt to any technology with ease.
Product Analytics: Enables product managers and owners to understand how their software is being used, where users drop off, which features are adopted, and how to improve engagement.
Mirror: Provides a simulation environment for practice, useful for roles like call center agents who need to rehearse before live customer interactions.
These solutions together help enterprises drive productivity and adoption across technology ecosystems.
Whatfix is launching a “role-aware AI agent.” Can you explain what this means and how it aligns with your current products and long-term vision?
Yes, we’re launching three AI agents, each designed with a specific role and use case in mind.
Authoring Agent: Designed to help business users quickly create Whatfix experiences. Today, building these experiences can take hours or days. The authoring agent reduces this time significantly. For example, when a seller is creating a quote, Whatfix can intelligently surface relevant details—like customer industry, competitors, and best discount ranges—so the seller can make the right decision. The authoring agent helps teams set up such experiences much faster.
Insights Agent: This ties into our product analytics solution. Traditionally, users had to rely on data analysts and wait days for reports. With the insights agent, users can query data in natural language and receive answers instantly, along with recommendations for next steps.
Guidance Agent: Built for end users such as sellers, adjusters, or employees navigating workflows. It delivers contextual guidance in real time, replacing the need to sift through long documents. For example, if a user asks who needs to apply for a particular process, the agent gives a direct answer along with the source reference.
Each agent is role-aware, meaning it is designed for a specific persona and function, which makes the experience more intuitive and impactful.
What were the frictions or pain points in the user journey that these agents are designed to solve, and how are they better than your previous solutions?
Let’s look at each one:
Authoring Agent: We’ve seen users become 30% more efficient in creating experiences with this tool. Our roadmap aims to increase that efficiency even further.
Insights Agent: In the past, getting data took days. Now, users receive actionable insights within minutes, saving significant time for product managers, business analysts, and product owners.
Guidance Agent: Earlier, employees had to comb through long manuals. Today, they get direct, contextual answers, reducing the time to find information dramatically.
Overall, the key value is speed, accuracy, and reduction in manual effort across all roles.
What core technologies and models are powering these agents?
We use a variety of models, but predominantly Azure OpenAI large language models (LLMs), coupled with retrieval-augmented generation (RAG)-style systems. In areas requiring very high performance and scale, we also build our own models. The LLMs are layered with enterprise-specific data from Whatfix, tailoring them to customer needs.
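For readers who want a concrete picture of the pattern described here, the sketch below shows a minimal RAG-style setup: retrieve enterprise-specific snippets, then ask an Azure OpenAI model to answer grounded in them. The in-memory document store, the keyword retrieval, and the deployment name are assumptions for illustration, not Whatfix's actual pipeline.

```python
# Minimal RAG-style sketch: retrieve enterprise-specific snippets, then ask an
# Azure OpenAI model to answer grounded in them. The store, retrieval, and
# deployment name below are illustrative placeholders.
from openai import AzureOpenAI  # assumes the official openai SDK (>=1.0)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<api-key>",                                        # placeholder
    api_version="2024-02-01",
)

# Toy in-memory "knowledge base"; a real system would use a vector store.
DOCUMENTS = [
    "Discount approvals above 20% require a regional sales director sign-off.",
    "Quotes for BFSI customers must include the compliance addendum.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for embedding search."""
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(set(query.lower().split()) & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str) -> str:
    """Answer a question using only the retrieved enterprise context."""
    context = "\n".join(retrieve(query))
    response = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name; placeholder
        messages=[
            {"role": "system",
             "content": "Answer using only the provided enterprise context. "
                        "Cite the snippet you relied on."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What approval do I need for a 25% discount?"))
```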
How do you plan to integrate these AI agents directly into existing workflows without disrupting user experience?
At Whatfix, we always co-build products with our customers. These agents are fully integrated into the existing workflows rather than being a separate copilot where users must switch contexts. This ensures a seamless experience.
We are gradually rolling them out across our customer base. These agents are also offered as add-on products, and we anticipate they will drive around 25% of our revenue in the near future.
Agentic AI is still a very new concept. What technical challenges and outcomes did you face during R&D that were new for your team?
One of the biggest challenges with LLMs is moving from demonstration to reality. A demo often looks impressive, but making the solution work consistently across customer use cases requires a lot of effort.
Unlike traditional software development, building AI-led solutions requires a different approach to evaluation, testing, and security. The answers from LLMs are not always predictable, so our R&D had to adapt with new evaluation frameworks and safeguards.
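One common way to evaluate non-deterministic LLM answers is to check each response for required facts and forbidden content, then gate a release on the aggregate pass rate rather than on exact-match assertions. The harness below is a minimal sketch of that idea; the `EvalCase` structure and checks are hypothetical, not Whatfix's internal framework.

```python
# Sketch of an evaluation harness for non-deterministic LLM answers: each case
# checks that required facts appear and that unwanted content does not, and the
# suite passes only if the aggregate pass rate clears a threshold.
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    question: str
    must_contain: list[str] = field(default_factory=list)      # facts that must appear
    must_not_contain: list[str] = field(default_factory=list)  # e.g. leaked internals

def evaluate(answer_fn, cases: list[EvalCase], pass_threshold: float = 0.9) -> bool:
    passed = 0
    for case in cases:
        text = answer_fn(case.question).lower()
        ok = all(term.lower() in text for term in case.must_contain) and \
             not any(term.lower() in text for term in case.must_not_contain)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case.question}")
    rate = passed / len(cases)
    print(f"pass rate: {rate:.0%}")
    return rate >= pass_threshold  # gate on aggregate quality, not single runs

# Example usage against the answer() function from the earlier RAG sketch:
cases = [
    EvalCase(
        question="What approval do I need for a 25% discount?",
        must_contain=["regional sales director"],
    ),
]
# evaluate(answer, cases)
```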
Over the years, we have integrated AI into various parts of Whatfix, but with these agents, AI now underpins much of our product development process. This required significant internal learning on how to build with AI and the tooling around it.
How do these AI agents interact with enterprise applications and data sources?
We already had strong integrations with enterprise systems, and we’ve extended these into the AI agents. For example, the guidance agent integrates deeply with customer knowledge repositories—whether that’s Confluence, Google Docs, Salesforce Knowledge, or others.
These integrations ensure the agents surface enterprise-specific insights rather than generic responses. In fact, the demand for integrations is increasing because customers see greater value when the agents are aligned closely with their own enterprise data.
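A common way to structure such integrations is to put each repository behind a shared retrieval interface so the agent can query Confluence, Google Docs, Salesforce Knowledge, and others uniformly, and carry source references through to its answers. The sketch below illustrates that shape; the interface, the stub connector, and the helper function are assumptions for illustration, not Whatfix's connector API.

```python
# Sketch of a knowledge-source abstraction: each repository is wrapped behind a
# common interface so the guidance layer can retrieve from all of them uniformly
# and cite where each snippet came from. Names here are illustrative only.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Snippet:
    text: str
    source: str   # e.g. a page URL, so answers can cite their origin

class KnowledgeSource(Protocol):
    def search(self, query: str, limit: int) -> list[Snippet]: ...

class ConfluenceSource:
    """Stub: a real connector would call the repository's search API here."""
    def search(self, query: str, limit: int = 5) -> list[Snippet]:
        return []  # placeholder

def gather_context(sources: list[KnowledgeSource], query: str) -> list[Snippet]:
    """Fan a query out across all configured repositories and merge the results."""
    results: list[Snippet] = []
    for source in sources:
        results.extend(source.search(query, limit=5))
    return results
```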
Looking ahead, how do you see these AI agents evolving over the next 12 to 24 months? What new capabilities are you aiming to add?
Our vision is centred around what we call “zero-click outcomes.” The idea is to move towards workflows that require little to no user effort.
Today, we are automating tasks that typically take 30 minutes and reducing them to a few clicks. In the future, we want to scale this to higher-order tasks that might take an entire day and bring them down to an hour or less.
So, the framework is to progressively expand the scope of automation across user workflows and make enterprise processes increasingly frictionless.