Why Enterprise AI Won't Be Plug-and-Play

Ramprakash Ramamoorthy, Director of AI Research at Zoho, explains why real-world AI adoption demands more than APIs and flashy demos—and why Zoho is betting on contextual, private, ground-up intelligence.

Aanchal Ghatak

From training its own LLMs on $20M worth of in-house infrastructure to rethinking AI privacy at the data layer, Zoho is proving that AI for enterprises isn’t about size—it’s about fit, context, and control.

As organizations rush toward rapid AI implementation, many overlook the foundational complexity beneath the surface. “AI is not magic—it is plumbing,” observes Ramprakash Ramamoorthy. Zoho’s strategy is fundamentally different: instead of relying on third-party APIs or fine-tuned consumer models, the company has invested over $20 million in foundational infrastructure (256 NVIDIA GPUs) to build its own Zia LLM models, deployed entirely on-premise for enterprise workflows. The LLMs range from 1.3 billion to 7 billion parameters and were built from scratch, with each model designed to deliver privacy-first, scalable value across an ecosystem of 55+ apps.

What are some of the biggest challenges businesses face while deploying AI—and how are you helping them navigate these?

The first, and possibly the biggest, challenge is expectation. Many organizations view AI as plug-and-play and expect instantaneous change, when in reality the complexity of the domain runs far deeper.

Secondly, organizations often operate with fragmented data channels and siloed, partial processes, which makes deploying AI more difficult. Even at Zoho, we have dealt with this. For example, when we explored running automation for legal and product support, we found that the “final-mile plumbing” took quite a while, despite our owning the entire platform. That says a lot.

And thirdly, the lack of integration between internal systems creates friction. A really obvious example:

I raise a travel request to attend a press event, and it gets approved. Then, separately from the travel request, I need to submit an on-duty request that also requires approval for the same trip. Those are disconnected processes. This friction can severely limit the effectiveness of AI: when systems can’t talk to each other, AI can only achieve a small fraction of its potential.

Are there any underrated challenges that companies often overlook while adopting AI?

Absolutely. Many organizations believe that AI is a magic button that can easily be plugged into an ecosystem via an API that costs $20 a month. But AI does not work like that; it must grow from a number of foundational components, especially in the enterprise stack.

Another neglected piece is search. It’s a foundational function, one that brings context and guardrails. For example, you can’t just ask an LLM to summarize an inbox with no access boundaries; that would be a huge privacy issue.
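
To make the idea concrete, here is a minimal sketch of access-bounded search in Python. It is an illustration, not Zoho’s code; the names (Document, search_inbox, llm_summarize) and the sample data are hypothetical.

```python
# Sketch: search filters documents by the caller's access rights
# *before* anything reaches the LLM. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Document:
    owner: str
    shared_with: set
    text: str

INBOX = [
    Document("alice", set(), "Payroll revision for Q3."),
    Document("alice", {"bob"}, "Draft agenda for the press event."),
]

def search_inbox(user, docs):
    """Return only the documents this user is authorized to see."""
    return [d for d in docs if d.owner == user or user in d.shared_with]

def llm_summarize(docs):
    # Stand-in for a real model call; the point is what it receives.
    return " / ".join(d.text for d in docs) or "(nothing visible)"

# Bob's "summarize my inbox" can only draw on what Bob may read.
print(llm_summarize(search_inbox("bob", INBOX)))  # press-event agenda only
```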

How does Zoho’s approach to AI differ from other tech players in the market?

The difference for us is really our platform. Zoho offers more than 55 integrated applications, and we have now built an AI layer on top of all of that first-party data. That makes it very powerful.

We also train our own large language models entirely on-premise on NVIDIA hardware, never on public clouds. From June 2024 to June 2025, we invested around $20 million in compute infrastructure to enable this. It’s a lot, but we see AI as foundational, not a nice-to-have.

How did cost constraints influence your AI infrastructure decision?

We made a deliberate choice. We’ve been investing in AI since 2012. When the ChatGPT moment happened in 2022, we quickly integrated it for customers, but we also doubled down on building our own models. We didn’t want to miss out on what we believe is a pivotal technology. That required a long-term, capital-intensive commitment—but it’s worth it.

Reasoning and generation remain challenges for smaller models. How are you addressing that?

That’s true—larger models still reason better. That’s why we’re investing in mixture-of-experts architecture and also exploring models with up to 100 billion parameters. But in enterprise scenarios, you often don’t need massive models. Around 80% of use cases can be solved with models under 7 billion parameters, especially when the task is contextual—like summarizing CRM notes or drafting internal emails.

Zoho’s AI implementation heavily leans on privacy and contextual intelligence. How are you ensuring privacy boundaries are respected?

We’ve designed everything around search boundaries. Even if two employees work at the same company, the AI agent only has access to what each individual is authorized to see. If I install an agent, it can’t peek into your payroll or appraisal data, even within the same organization.

Also, our models are trained independently per customer. So, if you’re an insurance firm using Zoho CRM for years, your data won’t be used to help a competitor who signs up today. No shared data leakage.
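
A toy sketch of what per-customer isolation can look like in code. This is an assumed design for illustration, not Zoho’s implementation; TenantModel and model_for are invented names.

```python
# Sketch: one model instance per customer, each learning only from its
# own tenant's data. Nothing is pooled across tenants.
class TenantModel:
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self.corpus = []                      # visible to this tenant only

    def learn(self, record):
        self.corpus.append(record)            # never leaves the tenant

_instances = {}

def model_for(tenant_id):
    """Lazily create exactly one isolated model instance per customer."""
    return _instances.setdefault(tenant_id, TenantModel(tenant_id))

model_for("insurer-a").learn("claims note from a long-time customer")
assert model_for("insurer-b").corpus == []    # no shared data leakage
```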

What types of enterprise metadata or user interactions have proven most valuable in customizing AI?

We work across three types of data:

Structured (e.g., OLTP/OLAP databases),

Semi-structured (e.g., helpdesk tickets, metadata like sender/receiver/time), and

Unstructured (e.g., PDFs, video recordings, images).

Smaller models do well with semi-structured data. Structured data is already machine-readable. Large models are best suited for processing unstructured formats. That’s how we decide the level of AI intervention needed.
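
As a rough illustration, the routing heuristic he describes might look like the sketch below; the labels, model sizes, and fallback rule are our assumptions, not Zoho’s published rules.

```python
# Sketch: the data type decides how much model you need.
def intervention_level(data_kind):
    routes = {
        "structured": "no LLM needed; query the OLTP/OLAP store directly",
        "semi-structured": "small model (e.g., ~1.3B) for tickets and metadata",
        "unstructured": "large model for PDFs, recordings, and images",
    }
    return routes.get(data_kind, "unknown kind; route to human review")

for kind in ("structured", "semi-structured", "unstructured"):
    print(f"{kind}: {intervention_level(kind)}")
```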

How is Zoho integrating its LLMs across the product stack?

We’ve started rolling it out with non-prompt use cases, like a simple “summarize” button in apps. Over time, we will enable prompt-based interactions. We’re working closely with pilot customers and dogfooding it internally, but we’re being cautious. We don’t want the model saying something inappropriate or inaccurate, so we’re opening it up in phases.

How are clients measuring success with your AI offerings?

The best metric so far is API usage. In 2024, we saw about 8–9 billion API calls monthly. In 2025, that number has already doubled to 16+ billion. And we don’t charge extra for this usage—it’s part of the product experience.

How do you determine the optimal model sizes for different use cases?

It depends on the context. For summarizing a paragraph, a 1.3 billion parameter model might be enough. For drafting a customer support email, you might need a 7 billion parameter model.

We auto-route tasks to the most suitable model. Users don’t pick the model manually—but their feedback (thumbs up/down) helps us reprocess tasks with a higher-capacity model if needed.
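
A minimal sketch of that feedback loop: start on the smallest adequate model and reprocess on a larger one after a thumbs-down. The model identifiers and the two-tier ladder are placeholders, not actual Zia model names.

```python
# Sketch: feedback-driven escalation across a model ladder.
LADDER = ["small-1.3b", "large-7b"]           # hypothetical identifiers

def run_model(name, task):
    return f"[{name}] output for: {task}"     # stand-in for inference

def answer(task, thumbs_down=False):
    result = run_model(LADDER[0], task)       # default: smallest fit
    if thumbs_down:                           # user feedback triggers retry
        result = run_model(LADDER[-1], task)  # reprocess on a bigger model
    return result

print(answer("summarize a paragraph"))                    # 1.3B-class
print(answer("draft a support email", thumbs_down=True))  # 7B-class
```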

What guardrails have you built into your AI systems?

We’ve embedded guardrails at the LLM level to ensure no biased, racist, or inappropriate outputs. Our search layer also acts as the foundation, controlling access and context. Prompts are built in, so customers don’t have to handle safety concerns manually.
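
One illustrative way to layer a built-in prompt with an output check is sketched below; the system prompt and blocklist are placeholders, not Zoho’s actual guardrails.

```python
# Sketch: a baked-in system prompt plus an output filter, so callers
# never wire up safety themselves. Terms below are placeholders.
SYSTEM_PROMPT = "Answer only from content the caller is authorized to access."
BLOCKED_TERMS = {"<slur>", "<confidential-marker>"}

def call_llm(prompt):
    return f"model response to: {prompt}"     # stand-in for a real call

def guarded_generate(user_prompt):
    output = call_llm(f"{SYSTEM_PROMPT}\n\n{user_prompt}")
    if any(term in output.lower() for term in BLOCKED_TERMS):
        return "Response withheld by guardrails."
    return output

print(guarded_generate("Summarize this week's tickets"))
```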

How does Zoho ensure organizational data isolation across customers?

We have architected isolation by design. Even within a company, different departments have separate data access layers. We don’t take risks. Each model instance is launched into a customer’s private ecosystem and learns only from that data. That’s core to our AI philosophy.

Finally, what’s your AI adoption playbook for a mid-sized Indian enterprise with limited ML infrastructure?

First, stop thinking departmentally. Traditionally, software has been sold department by department, and now those departments don’t talk to each other.

So here’s the playbook:

1. Break down data silos – build bridges between systems.

2. Streamline processes – automate rule-based tasks.

3. Don’t over-rely on “magic” APIs – build maturity step by step.

You don’t need to replace your entire stack at once. But start preparing today, and in six months, you’ll be in a better position to adopt AI meaningfully. Digital maturity leads to AI maturity. And yes—it takes time. Don’t expect results overnight.

aanchalg@cybermedia.co.in