How Cisco is re-architecting for the AI-first world

Jeetu Patel outlines Cisco’s vision for "AI in every loop," utilising AgenticOps and digital twins to automate infrastructure. The strategy focuses on secure token generation and simplifying upgrades for AI readiness.

Punam Singh

At a time when enterprises are moving from AI experimentation to operational deployment, the conversation around autonomy, governance, and infrastructure maturity is becoming central to technology strategy. Networking, long considered a foundational layer, is now being re-imagined through agentic systems that can observe, decide, and act in near real time. In this interview, we explore how this shift is redefining operational models, from infrastructure management to enterprise readiness, and what it means for organisations navigating the next phase of AI adoption.


Jeetu Patel, President and Chief Product Officer at Cisco, outlines a vision where AI moves from assisting humans to becoming an embedded operational layer across networks. He discusses Cisco’s “AI in every loop” philosophy, the role of AgenticOps and AI Defence in ensuring trust and governance, and why token generation capability, digital twins, and simplified infrastructure upgrades will be critical for enterprises seeking to translate AI ambition into measurable outcomes.

Cisco has moved from a human-in-the-loop model to an “AI in every loop” philosophy. With AgenticOps enabling autonomous agents to execute network changes, how is Cisco solving the agent integrity problem?

The way you have to think about this is, what is the problem we are trying to solve? The problem we are trying to solve is to simplify the management of infrastructure through AgenticOps.


Rather than a human figuring out where everything goes wrong, you have agents that are almost ambient in your network that are monitoring your network at all times. They identify anomalies, and based on that anomaly, they might identify things you want to do to remediate it.

The remediation would not just be rolled out into production. You would have a digital twin where you run recorded live data and past production data through it to see if the change affects the system. If things look acceptable, then and only then do you roll it out into production.
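The gate described here, replay recorded production data through a digital twin and promote the change only if it passes, can be sketched in a few lines. This is a minimal illustration, not Cisco's implementation; every name, data shape, and threshold below is hypothetical:

```python
def replay_in_twin(change, recorded_traffic):
    """Hypothetical: apply `change` to a digital twin, replay recorded
    production traffic through it, and return the observed error rate.
    Stand-in logic: assume the change resolves half the failing flows."""
    baseline_errors = sum(1 for flow in recorded_traffic if flow["failed"])
    remaining_errors = baseline_errors // 2
    return remaining_errors / max(len(recorded_traffic), 1)

def safe_to_deploy(change, recorded_traffic, max_error_rate=0.01):
    """Deployment gate: only roll out to production if the twin replay
    keeps the error rate at or below the tolerated threshold."""
    return replay_in_twin(change, recorded_traffic) <= max_error_rate
```

The key design point from the interview is the ordering: the remediation is never applied to production first; the twin replay is the mandatory checkpoint.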

As agents become autonomous, how does Cisco perform background checks on agents to ensure they remain governed and do not create security vulnerabilities or new digital divides?

That is precisely the reason we have AI Defence: to validate that models are working the way they are intended. When you see drift in the model, hallucination, or bad intent, you can enforce runtime guardrails.

The objective is to prevent models from going rogue by validating them continuously and algorithmically.
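A runtime guardrail of the kind described, one that continuously validates model behaviour and flags drift, might look conceptually like the sketch below. This is illustrative only and not the AI Defence API; the class name, window size, and tolerance are all assumptions:

```python
from collections import deque

class RuntimeGuardrail:
    """Illustrative drift monitor: keeps a rolling window of model
    confidence scores and flags the model when the window's mean
    drifts too far from an established baseline."""

    def __init__(self, baseline_mean, tolerance=0.1, window=100):
        self.baseline_mean = baseline_mean
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)  # oldest scores fall off

    def observe(self, score):
        """Record one model output score at runtime."""
        self.scores.append(score)

    def drifted(self):
        """True once the rolling mean leaves the tolerated band."""
        if not self.scores:
            return False
        rolling_mean = sum(self.scores) / len(self.scores)
        return abs(rolling_mean - self.baseline_mean) > self.tolerance
```

The continuous, algorithmic part is the point: the check runs on every observation rather than as a one-off pre-deployment audit.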

You introduced AI Canvas as a multiplayer workspace beyond chatbots. How is this collaborative interface designed to augment human intention rather than replace it?

In these systems, agents become teammates. They do three things.

  • They perform jobs you do not want to do or do not have time to do.
  • They perform jobs you might not be as effective at doing, such as repetitive work.
  • They perform jobs you cannot do, such as correlating very large datasets.

The goal with AI Canvas is to provide a generative UI and dashboard so that when people troubleshoot, the system helps surface the right information, not only to an individual but to the team. That is why it is designed as a multiplayer approach.

Cisco’s AI Readiness Index shows that many Indian companies want to deploy agents, but only a small percentage are ready. What is Cisco’s strategy to help the mid-market reduce AI infrastructure debt without replacing legacy systems?

Every country will need a token generation capability. To be competitive, you need to generate tokens safely, securely, economically, and efficiently because that will directly impact economic prosperity and national security.

The infrastructure for AI is fundamentally different from classical workloads, so upgrades are necessary. The objective is to make that upgrade process plug-and-play so it is not complicated for organisations of any size.

Regarding Silicon One G300, how are you balancing large-scale AI infrastructure demands with energy efficiency and ROI?

The G300 chip is not only for hyperscalers. We are using it in scale-out architectures for enterprises as well. By serving hyperscalers, we can take the learnings and apply them to enterprise deployments.

The G300 is a scale-out chip for the Cisco 8000 series switches and will also be available for Nexus 9K platforms used by enterprises.

Cisco has projected significant AI-driven revenue, yet many customers remain in experimentation. Beyond hardware, how is Cisco evolving consumption models to ensure productivity gains?

What is happening in the industry is a capabilities overhang, meaning features and capabilities are being built faster than organisations can absorb them. Because coding is becoming one of the fastest-moving use cases, AI will increasingly be built with AI, creating an exponential pace of innovation.

As that happens, the adoption and absorption practices must mature. Infrastructure complexity, security, and safety should not hold back adoption. If those concerns are minimised, organisations can focus on building applications that deliver direct value to customers.