Advertisment

Shailaja Shankar on Cisco’s approach to AI-driven cybersecurity

Shailaja Shankar discusses Cisco's AI-driven cybersecurity strategy, focusing on secure AI innovation, threat mitigation, governance, and the future of AI security.

author-image
Punam Singh
New Update
Shailaja Shankar

Shailaja Shankar, Senior Vice President of Engineering, Cisco Security

Listen to this article
0.75x 1x 1.5x
00:00 / 00:00

As artificial intelligence continues to reshape enterprise operations, the security challenges surrounding AI adoption are growing just as rapidly. Organizations face increasing risks, from AI model poisoning and deepfake-enabled phishing attacks to data privacy threats and regulatory challenges. Ensuring trustworthy, secure, and resilient AI systems has become a top priority for businesses worldwide.

Advertisment

In this exclusive conversation with DataQuest, Shailaja Shankar, Senior Vice President of Engineering, at Cisco Security, shares insights into Cisco’s security-first approach to AI innovation. She discusses the company’s Responsible AI Framework, the need for proactive security measures, and how Cisco’s AI Defense solutions are helping organizations safeguard their AI models and data from evolving cyber threats. She also explores the importance of industry-wide AI security frameworks and provides guidance for enterprises looking to adopt AI while staying compliant with security best practices.

Excerpts:

DQ: AI is evolving rapidly, and organizations are integrating it into critical operations. What are the biggest security challenges enterprises face in AI adoption today?

Advertisment

Generally speaking, every organization I speak with is eager to drive efficiency, automation, and velocity in their critical operations with AI. However, a majority of them have not established a well-defined AI strategy or put in place a governance framework to drive leadership alignment, cultural readiness, and overcoming significant technical and operational barriers.

The threat landscape has also evolved. From attacks aimed at compromising AI model integrity or data poisoning - such as prompt engineering exploits - to risks of intellectual property theft and the potential loss of sensitive or private data. Contributing to an already increasingly complex security environment are more traditional cyber like phishing attacks leveraging deep fakes, insider threats due to identity/credential compromise, and malware that evolves faster than current detection and protection systems. The attack surface clearly has grown, and adversarial AI has the potential to cause large-scale disruption to operations unlike ever before. On top of these risks, organizations must also navigate regulatory compliance challenges, amplifying the need for safeguarding AI which is a critical part of an AI strategy and adoption.

DQ: How does Cisco define "secure AI innovation," and what fundamental principles guide your security strategy in the process of development?

Advertisment

‘Secure AI innovation’ is multifaceted, but at its core, it is about securing AI technologies, integrating advanced security measures, and encouraging a culture of innovation and responsibility. By doing this, we can make sure our AI solutions are not only powerful and efficient but also safe and reliable.

Having well-established guidelines is critical, and at Cisco, this is our Responsible AI Framework which is built around six principles: transparency, fairness, accountability, privacy, security, and reliability that guide our secure AI development and innovation. While we like to see speed in innovation-to-market, we want that to be within the Responsible AI Framework, one that does not favor expediency over security.

DQ: Many security frameworks focus on AI governance after deployment. How does Cisco embed security proactively into AI models from the design phase?

Advertisment

This is where our Responsible AI Framework is applied, whether it’s developing a product or model for our customers and partners to use, integrating and building on a third-party model, or providing AI tools or services for our own internal operations. In practice, we strive to bring these principles to life throughout our entire secure software development lifecycle.

For example, we have measures in place for run-time security, and our teams incorporate security measures into AI models early in the design and development process. This means identifying and mitigating potential risks related to vulnerabilities, data privacy, and system integrity during the design phase, rather than after deployment. Security is built in by design.

DQ: Can you share how Cisco’s security solutions are helping organizations build AI models that are not only efficient but also resilient against cyber risks?

Advertisment

We take a platform-approach by embedding AI defense capabilities into our portfolio, protecting customers across the entire lifecycle of building a generative AI application. With AI Defense, we ensure bad actors aren’t poisoning the training data so the data is safe, secure, and managed properly. We ensure companies understand the security and safety vulnerabilities of models and applications at every stage of development, and the guardrails necessary to protect them in production. We provide runtime security so every question a user asks is scanned by AI Defense for safety, security, privacy, and relevance risks.

We’ve integrated AI Defense’s employee protection capabilities with Cisco Secure Access, one of our fastest-growing products that protects employee use of cloud applications. Our acquisition of Valtix was integral in our cloud protection strategy for AI Defense. If organizations are running AI workloads in the cloud, they can deploy Cisco Multicloud Defense and have AI Defense as part of that to enforce policies. For organizations running multiple cloud workloads, Hypershield provides defense between model communications. AI Defense can automatically enforce policies on top of that. 

DQ: Do you see a need for a standardized industry framework to govern AI security? If so, what role can companies like Cisco play in shaping these regulations?

Advertisment

Yes, there is a clear need for a standardized industry framework to govern AI security. It’s all about trust and building trustworthy solutions. As AI technology becomes more integrated, the risks increase, and a standard framework can help ensure that all AI systems are developed responsibly, securely and ethically.

Cisco plays a big role in this. We’ve been an active participant in the development of global standards; sharing best practices, collaborating with regulatory bodies, and coming together with other organizations in AI to develop standards. For example, Cisco developed the AI Security Incident Collaboration Playbook with the U.S. Cybersecurity & Infrastructure Security Agency, other AI providers and security vendors, and critical infrastructure owners and operators. The playbook will help coordinate AI security incident response between governments, industry peers, and global partners.

DQ: AI security is not just about cyber threats but also about bias, transparency, and ethical concerns. How does Cisco address these aspects while ensuring AI-driven decisions remain accountable?

Advertisment

This is an integral part of our Responsible AI framework: transparency, fairness, accountability, privacy, security, and reliability. With AI, achieving better decisions requires assurance that the training data represents the demographics of individuals or groups across the full spectrum of diversity to which AI will be applied, and we strive to identify and remediate any bias within our algorithms, training data, and applications. This framework requires our teams to account for these impacts from the very beginning of development through the end of the AI lifecycle.

Accountability measures include requiring documentation of AI use cases, conducting impact assessments, and having appropriate oversight by a group of our cross-functional leaders. There are also mechanisms in place for our customers to provide feedback and raise any concerns for review and action, and we regularly update these practices. This feedback loop is critical to ensuring we are responsive as well as have a human engaged in systems evolution.

DQ: What advice would you give to enterprises looking to adopt AI while ensuring they remain compliant with security best practices?

First and foremost, take time to establish an AI strategy and AI governance framework. Invest in AI Security to better understand and mitigate risk on both user and application levels. Last year, we published our Cisco AI Readiness Index which provided insight into the state of enterprise AI adoption. It showed a common trend: despite growing pressures to leverage and deploy AI, readiness seems to be declining. There were many reasons behind this but concerns around safety and security were among the most prominent.

Investing in AI security can address practical concerns about sensitive data exposure from employees sharing intellectual property, PII, and other confidential information with unsanctioned third-party AI tools. It can also help businesses developing and deploying their own AI applications address a variety of vulnerabilities to ensure these systems are safe and secure.

DQ: How does Cisco see AI security evolving in the next five years, and what key trends should businesses be prepared for?

Over the next few years, threat actors will continue to emulate real-world investments, as they have historically. We will see the rise of autonomous systems, the entry of rapidly developed, next-generation AI applications, and applications leveraging AI-assisted coding tools. The AI ecosystem will evolve quickly, while vendors will deliver increasingly automated threat detection and response capabilities to counter AI-driven cyberattacks. These efforts will aim to ensure that AI model integrity, identity of people and things, and data security are well protected.

Advertisment