Aditya Chhabra.
CreateBytes, the AI-native technology and product powerhouse, is leading the way to a digital future.
Aditya Chhabra, Founder & Chief Technology Officer of CreateBytes, tells us more. Excerpts from an interview:
DQ: What led you to start CreateBytes, and how has that vision evolved over time?
Aditya Chhabra: When I started CreateBytes, AI was mostly being used to automate repetitive tasks—think rules-based bots or simple predictive analytics. But for me, that was just scratching the surface.
I saw this huge gap between automation and intelligence—between doing faster and thinking better. That’s where the idea came from: build systems that can actually learn, adapt, and make decisions autonomously.
We started as a design-tech studio, solving very real-world problems, and that gave us a solid foundation. But over time, we evolved into a full-stack AI product company—building LLM agents, orchestration platforms, and vertical intelligence solutions.
Now, our work spans industries like defense, fintech, healthcare, and even fitness tech. The vision today? Don’t just build tools. Build distributed intelligence.
DQ: How does your R&D lab decide what to build next—and how do you know if it’s actually solving the right problem?
Aditya Chhabra: We follow something that we call recursive validation—which basically means we use our own AI agents to test other AI agents. It’s a feedback loop by design.
We run all this under CB Labs—where the rule is simple: if it doesn’t accelerate intelligence, we don’t build it. We prototype fast, internally deploy even faster, and only scale if it passes real operational stress.
Take our work on multi-agent systems for marketing workflows—we validated that using CB Vision and YugYog internally before ever shipping it to clients. And lately, we’ve been applying techniques from the 2025 AutoAgent benchmark papers to refine alignment and failure tracing in complex orchestration.
It’s not just product-market fit out in the market—it’s fit within our own systems first!
DQ: You’re working across very diverse sectors. What keeps your innovation philosophy consistent across such different domains?
Aditya Chhabra: Great question. We’ve realized that while industries differ, the structure of intelligence problems stays remarkably consistent.
Whether it's defense ops or fintech, the core need is the same: intelligent workflows. Not just dashboards, not just alerts—systems that adapt in real time and learn from outcomes.
We build using modular AI systems—agent orchestration layers, plug-and-play pipelines, and reusable LLMs trained with domain-specific prompts. Think of it as infrastructure, not solutions.
We’re seeing strong validation of this approach in recent frameworks like Microsoft’s 2025 paper on “Reusable Cognitive Agents,” which aligns closely with our internal tech stack.
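To make the "infrastructure, not solutions" idea concrete, here is a minimal sketch of what a plug-and-play agent pipeline with domain-specific prompts could look like. All names and interfaces below are hypothetical illustrations, not CreateBytes' actual stack:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical plug-and-play pipeline: each stage is a small reusable
# function that transforms a shared context dict, so the same
# infrastructure serves defense, fintech, or healthcare workflows.
@dataclass
class AgentPipeline:
    stages: list[Callable[[dict], dict]] = field(default_factory=list)

    def add(self, stage: Callable[[dict], dict]) -> "AgentPipeline":
        self.stages.append(stage)
        return self  # allow chaining

    def run(self, context: dict) -> dict:
        for stage in self.stages:
            context = stage(context)
        return context

# Domain knowledge enters only through the prompt template,
# not through hard-coded logic.
def make_prompt_stage(template: str) -> Callable[[dict], dict]:
    def stage(ctx: dict) -> dict:
        ctx["prompt"] = template.format(**ctx)
        return ctx
    return stage

pipeline = AgentPipeline().add(
    make_prompt_stage("Summarize this {domain} alert: {event}")
)
result = pipeline.run({"domain": "fintech", "event": "unusual wire transfer"})
print(result["prompt"])
```

Swapping the template (or adding stages) retargets the same pipeline to a new vertical, which is the reuse property the answer describes.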
DQ: How are advancements in semiconductors influencing your AI-at-the-edge initiatives?
Aditya Chhabra: The edge is where the action is now, right? Latency, privacy, autonomy—all demand intelligence to move closer to the point of data.
At CreateBytes, we’re deploying edge agents in areas like biomechanics through KriGat, and in security infrastructure via YugYog AI. We're talking real-time posture tracking, compliance alerts, and anomaly detection—on-device, not in the cloud.
We’ve integrated quantized transformer models, run inference on Tensor Cores, and actively optimize for low-power, high-throughput environments.
The 2025 EdgeFormer paper from ETH Zurich, for instance, really influenced our shift toward using sparse attention models at the edge. This isn’t theoretical—our systems are live in rehab centers, manufacturing plants, and border zones.
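The quantization trade-off behind on-device inference can be shown with the core arithmetic of 8-bit post-training quantization. This is an illustrative sketch of the general technique, not CreateBytes' deployment code:

```python
# Illustrative only: 8-bit quantization maps float weights onto small
# integers, cutting model memory roughly 4x versus float32 at the cost
# of a bounded rounding error—what makes transformers fit on edge devices.

def quantize_int8(weights):
    """Map float weights onto int8 range [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each restored value stays within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Production systems add per-channel scales and calibration data, but the storage-versus-precision trade is the same one that drives edge deployment.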
DQ: Can you walk us through the architecture of CB Vision and YugYog.ai, and how they’re transforming sectors like defense or healthcare?
Aditya Chhabra: Sure—CB Vision is a visual intelligence platform built for enterprise. No-code, plug-and-play pipelines, with modules like facial recognition, movement tracking, and even queue analytics.
What’s unique is that it’s fully customizable, so clients in defense can deploy it for perimeter monitoring, while hospitals use it for patient safety and flow optimization. The system runs on a hybrid cloud-edge architecture with an orchestration core.
YugYog.ai is our context-aware video analytics engine—built more for compliance-heavy environments. It uses vision transformers, temporal reasoning, and event annotation. For example, it can detect PPE violations, unsafe posture, or abnormal dwell time.
Together, these platforms bring a layer of intelligence to video that was previously passive. We’re essentially making video content machine-readable at a system level.
DQ: How do you see the role of generative AI and LLMs evolving in sectors like defense or surveillance?
Aditya Chhabra: The shift is already happening. LLMs are moving from being passive Q&A systems to real-time mission assistants.
We’re deploying multi-agent setups where LLMs generate SOPs, flag anomalies from field intel, and even simulate risk outcomes based on live data. One exciting angle is KG-LLM hybrid systems—especially inspired by recent DARPA-backed research on tactical LLMs with traceability layers.
When these are paired with drone video analytics or IoT edge sensors? You’re no longer just observing—you’re predicting, adapting, and guiding operations in real time. This is where LLMs are headed: autonomous mission control, not backend automation.
DQ: How has CreateBytes helped clients transition from traditional systems to truly AI-first ecosystems?
Aditya Chhabra: The way we see it, most enterprises are still stuck in "digital transformation" mode—static dashboards, fragmented data, manual handoffs. We come in with agent-first thinking. That means every workflow has an embedded LLM agent or a vision system that acts, learns, and feeds back into the loop.
Platforms like YugYog and CB Vision have helped defense agencies, hospitals, and manufacturers move to AI-first systems. We’ve replaced dashboards with real-time summaries, predictive prompts, and self-improving workflows.
The result? Decisions that are no longer delayed by data—they’re driven by it.
DQ: How is AI creating new job roles, especially in India’s tech ecosystem?
Aditya Chhabra: It’s honestly one of the most exciting shifts we’re seeing—and we’re living it at CreateBytes.
Roles like Prompt Engineers, Agent Orchestrators, and AI Ops Managers weren’t even a thing three years ago—and now they’re foundational to how we ship products. We’ve seen backend engineers who used to write Django APIs now fine-tuning LLM prompts for domain-specific agents.
Or frontend devs training transformer models to optimize multi-modal user flows. It’s a total transformation of skill DNA. That’s why we’re opening up CB Academy—to help working professionals, especially those with 3 to 15 years of experience, transition into AI-native roles.
We’re building hands-on learning tracks for full-stack engineers, DevOps, and system architects—to turn them into AI designers, not just implementers.
And we’re not alone—Stanford’s 2025 AI Ops Taxonomy paper basically confirms this: the next big wave isn’t in model training, it’s in system-level orchestration and maintenance of autonomous agents. India’s developers are perfectly placed to lead this. They’ve built scale—now they’re ready to build intelligence.
DQ: Is ethical AI about good design or strong policy? Where does governance really begin?
Aditya Chhabra: Honestly, it starts with design. By the time you’re enforcing policy, it’s already reactive.
At CreateBytes, we build agents with explainability layers, audit trails, and embedded decision trees. We follow a principle we call traceable autonomy—inspired partly by MIT’s 2025 paper on "Interpretable LLM Actions for Regulated Environments."
Policy matters—but good design enforces it automatically. Ethics isn’t a patch. It’s a protocol.
DQ: With your expansion to cities like Mumbai and Bristol, what’s your global strategy? How can India lead this AI wave?
Aditya Chhabra: Our expansion strategy is structured around distributed innovation. Mumbai and Hyderabad are engineering powerhouses, Bristol is our AI R&D hub, and Ft. Lauderdale is our deployment lab for US markets.
Each hub owns a piece of the AI stack—from LLM infra to field deployment. This allows us to stay agile, local, and global at the same time.
And India? It’s positioned to lead—not just with talent, but with systems thinking. We’re already building world-class AI from India, not just supporting it. The next wave of global AI platforms will have Indian IP at their core.