Government and academia's role in "AI for All"

Hrishikesh Dewan, CEO of Ziroh Labs, explains how Kompact AI achieves up to 3x performance on standard CPUs through advanced algorithmic and hardware-level optimisations, without sacrificing quality.

Punam Singh

Hrishikesh Dewan, Co-founder and CEO of Ziroh Labs

The mission to democratize AI, especially in India, requires governments and academic institutions to be key enablers.

A significant hurdle has been the over-reliance on GPUs, limiting accessibility. By decoupling AI from this dependency and enabling high-performance execution on readily available CPUs, Kompact AI addresses a major part of this challenge.

India's vast pool of talent can create high-impact, society-focused AI solutions. With more accessible AI, federal institutions are empowered to build, deploy, and scale AI in-house, reducing dependence on foreign resources and ensuring solutions are tailored to local needs, thereby achieving the goal of "AI for All."

With these same thoughts in mind, Dataquest interacted with Hrishikesh Dewan, Co-founder and CEO of Ziroh Labs, for deeper insights.

Excerpts:

How does the Kompact AI platform achieve a 3x performance increase on standard CPUs without compromising on quality? What specific technical innovations enable this capability?

Kompact AI achieves up to 3x performance improvements on standard CPUs through a combination of rigorous algorithmic optimisations at both the model and hardware levels. The platform is built on years of research in mathematics, computer science, and distributed systems, resulting in new techniques that maximise underlying CPU efficiency without affecting model accuracy. Key innovations include:

    • Optimised model execution: Rewriting model operations to exploit CPU parallelism, vectorisation, and cache efficiency.
    • Intelligent memory management: Reducing overheads for large context windows and leveraging KV caching effectively.
    • Hardware-aware scheduling: Algorithms adapt dynamically to CPU architecture characteristics to fully utilise cores and minimise bottlenecks.
    • Distributed execution: Seamless scaling across multiple CPUs while maintaining consistent model outputs.
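As a rough illustration of the KV-caching idea mentioned above (a generic sketch of the standard technique, not Kompact AI's actual implementation), an autoregressive decoder can reuse the keys and values computed at earlier steps instead of recomputing them for the whole prefix at every step:

```python
import numpy as np

def attend_with_cache(q, k_new, v_new, K_cache, V_cache):
    """One decoding step: append this step's key/value to the cache,
    then attend the single query over all cached positions."""
    K_cache.append(k_new)
    V_cache.append(v_new)
    K = np.stack(K_cache)                 # (t, d) -- grows by one row per step
    V = np.stack(V_cache)
    scores = K @ q / np.sqrt(q.size)      # (t,) attention logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                          # softmax over cached positions
    return w @ V                          # (d,) context vector

# Without the cache, every step would recompute keys and values for the
# entire prefix; with it, each step adds only one new key/value pair.
rng = np.random.default_rng(0)
d, K_cache, V_cache = 8, [], []
for _ in range(5):
    out = attend_with_cache(rng.standard_normal(d),
                            rng.standard_normal(d),
                            rng.standard_normal(d),
                            K_cache, V_cache)
```

Managing this cache's memory footprint is what "reducing overheads for large context windows" refers to: the cache grows linearly with context length, so its layout dominates memory use at long contexts.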

Together, these innovations allow Kompact AI to deliver GPU-class throughput on CPUs while preserving model quality and accuracy.
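To illustrate in general terms what exploiting CPU vectorisation means (a minimal sketch, not Kompact AI's code): the same dot product can be computed element by element, or through a SIMD-backed BLAS kernel that processes several elements per instruction and streams through memory in cache-friendly blocks:

```python
import numpy as np

def dot_scalar(a, b):
    """Naive element-by-element dot product: one multiply-add per
    iteration, leaving the CPU's SIMD lanes idle."""
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# a @ b dispatches to a vectorised BLAS routine; same result,
# far fewer instructions retired per element.
vectorised = float(a @ b)
scalar = dot_scalar(a, b)
```

Rewriting model operations so that hot loops hit such vectorised, cache-efficient code paths is one standard way to close the gap between naive and peak CPU throughput.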

The partnership with IIT Madras and the launch of the Centre of AI Research (CoAIR) is a major part of your strategy. How is this collaboration translating academic research into real-world, scalable products for the market?

We are grateful to IIT Madras, Professor V Kamakoti (Director, IIT Madras) and Professor Madhusudhanan Baskaran (Chief Data and AI Strategist, IIT Madras Pravartak) for their continued support and belief in our vision from the outset. Their rigorous validation of our research and methodologies has been invaluable in ensuring that our solutions are both scientifically sound and practically viable.

Through this collaboration, Kompact AI has been able to translate cutting-edge academic research into scalable, real-world AI products. The platform addresses a critical gap in AI adoption caused by GPU dependency, enabling high-performance model deployment on CPUs. This allows enterprises and developers to:

    • Run large language models of up to 32 billion parameters cost-effectively without relying on expensive GPU infrastructure.
    • Scale AI deployments sustainably, bridging the divide between research innovation and practical implementation.
    • Fine-tune models on domain-specific data efficiently, making AI more relevant to specific applications.

By grounding our innovations in rigorous research and testing, this partnership ensures that Kompact AI delivers robust, accessible, and scalable AI solutions for a global audience.

Could you share some examples of how Kompact AI is being used in practice? For instance, how does it address the unique AI accessibility challenges in rural or low-resource settings in India?

At present, our focus is on helping enterprises shift their AI applications and workflows to CPUs, enabling high performance without reliance on GPUs. This is where Kompact AI is already creating measurable impact. With Kompact AI, enterprises, startups, and academic institutions can now deploy models on readily available CPU systems, whether on-premise or in local data centres, making AI adoption viable even in low-resource environments. Although we only came out of stealth in September, these early engagements have already validated our vision and strengthened our resolve to make AI universally accessible.

Many believe the future of AI belongs to large language models with billions of parameters, which require immense computational power. How do you see the CPU-driven AI movement, and your focus on models under 50 billion parameters, challenging that paradigm?

We believe that smaller, domain-specific models are the future for many enterprise use cases. While trillion-parameter LLMs are impressive, most organisations don’t require that scale to solve their specific problems.

For instance, a general-purpose LLM might generate jokes or summarise news, whereas an insurance firm requires an AI assistant that can address customer queries about policies, claims, or offerings. A smaller LLM fine-tuned on domain-specific data can handle these tasks effectively, often outperforming a generic large model for that context.

Smaller models also bring several practical advantages:

    • Efficient deployment: They can run on CPUs or lower-tier hardware without needing expensive GPUs.
    • Cost-effectiveness: Reduced computational requirements and lower infrastructure costs make scaling more feasible.
    • Faster iteration: Quicker fine-tuning cycles enable enterprises to adapt models to evolving business needs.

By focusing on models under 50 billion parameters, Kompact AI enables high-performance, accessible AI while challenging the paradigm that bigger is always better.

The high cost and energy consumption of GPUs have created a significant barrier to entry for many companies. How do you believe frugal innovation and democratizing access to AI will reshape the global competitive landscape over the next five years?

Democratizing AI means providing equitable access to advanced AI technologies. When everyone, from small startups to large enterprises, can use and build AI within their own budgets, it removes monopolies and entry barriers, making the playing field inherently more competitive.

Accessibility empowers people to figure out their own ways to make AI useful, both for their organisations and for society at large.

Innovation further flourishes when no one is constrained by a lack of resources, models, or capital, leading to more inclusive and impactful AI applications.

This enables a broader set of innovators to:

    • Build solutions tailored to their specific needs,
    • Experiment and iterate without resource constraints, and
    • Drive AI adoption in diverse sectors and communities.

Over the next five years, frugal innovation and accessible AI will reshape the global competitive landscape by enabling widespread experimentation, tailored solutions, and societal impact, levelling the playing field for all participants.

What role do you see governments and academic institutions playing in democratizing AI, particularly in a country like India, which aims for "AI for All"? What is the next big hurdle that needs to be addressed in this area?

A significant hurdle in democratizing AI has been the over-reliance on specific infrastructure, particularly GPUs, which has limited accessibility and created a divide in who can leverage AI effectively. By decoupling AI from GPU dependency and enabling high-performance execution on CPUs, Kompact AI has addressed a significant part of this challenge.

Governments and academic institutions play a vital role in this ecosystem. India has a large pool of brilliant students, professors, and AI practitioners capable of creating high-impact, society-focused AI solutions. With AI becoming more accessible, federal institutions are free to build, deploy, and scale AI in-house across the country, thereby reducing dependence on foreign resources and implementing AI solutions tailored to local needs.