Huzefa Motiwala, Senior Director, Technical Solutions, India and SAARC, Palo Alto Networks
By 2025, the cyber threat landscape has reached a tipping point. With adversaries now wielding generative AI tools like WormGPT, attacks are more sophisticated, scalable, and harder to detect than ever before. From AI-crafted phishing emails and voice-cloned deepfakes to polymorphic malware that mutates faster than static, traditional defences can respond, security teams are being pushed to the limit.
In response, the cybersecurity industry is moving quickly to adopt AI, not simply as a defensive measure but as the centerpiece of autonomous security operations. Huzefa Motiwala, Senior Director of Technical Solutions (India and SAARC) at Palo Alto Networks, explains that the question for defenders is no longer whether they can keep up, but how quickly they can outpace adversaries in an increasingly intelligent arms race.
How are cybercriminals leveraging AI to enhance phishing, malware evasion, and deepfake scams in 2025?
Today we’re seeing cybercriminals increasingly use AI as a force multiplier, most commonly to enhance phishing, craft deepfakes, and generate polymorphic malware that evades traditional detection methods.
In particular, deepfake attacks—especially those using advanced voice cloning—are expected to rise significantly in 2025, as Generative AI (GenAI) becomes more accessible and enables more convincing, large-scale social engineering campaigns. Tools like WormGPT are lowering the barrier for attackers to launch sophisticated operations with minimal effort.
These methods are increasingly effective because they manipulate the human element of security, leveraging emotion, urgency, and perceived familiarity. Beyond known threats, the real challenge now is in detecting the nuanced and fast-changing tactics AI can produce in real time. In some instances, attackers can extract sensitive data within a few hours.
To keep pace, defenders must evolve their strategies with equal speed—relying on intelligence, automation, and contextual awareness to navigate a threat landscape that’s no longer static but shaped by constantly evolving systems.
Do you believe the cybersecurity industry is entering an “AI vs AI” arms race between attackers and defenders?
GenAI is now being used to automate and scale attacks, requiring defenders to adopt equally advanced countermeasures. In fact, GenAI-related data loss prevention (DLP) incidents more than doubled over the course of 2025, now accounting for 14% of all data security incidents.
The key lies in using trusted data and intelligent automation to reduce noise and surface what truly matters. From our experience, meaningful progress comes when security operations shift from chasing alerts to prioritizing context—enabling teams to investigate and respond more efficiently. Instead of overwhelming analysts with sheer volume, the focus should be on filtering signals from noise in real time.
For example, while our SOC processes over 36 billion events daily, it distills them into just 133 potential incidents, with only eight requiring manual review. This level of intelligent triage lets teams focus their attention where it truly matters: the few incidents that demand expert intervention.
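To make that kind of funnel concrete, here is a minimal, hypothetical sketch of multi-stage event triage. The stage names, thresholds, and scoring fields are illustrative assumptions, not Palo Alto Networks’ actual pipeline.

```python
# A hypothetical event-triage funnel: raw events -> incidents -> manual review.
# Thresholds and fields are illustrative assumptions, not a vendor's real logic.
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    severity: float      # 0.0 (benign) .. 1.0 (critical), from an upstream model
    corroborated: bool   # seen in more than one telemetry source

def triage(events: list[Event]) -> dict[str, list[Event]]:
    """Collapse raw events into incidents, then flag the few needing humans."""
    # Stage 1: drop low-severity noise automatically.
    candidates = [e for e in events if e.severity >= 0.7]
    # Stage 2: keep only events corroborated across telemetry sources.
    incidents = [e for e in candidates if e.corroborated]
    # Stage 3: escalate only the most critical incidents to analysts.
    manual = [e for e in incidents if e.severity >= 0.95]
    return {"incidents": incidents, "manual_review": manual}

events = [Event("endpoint", 0.2, False), Event("cloud", 0.8, True),
          Event("network", 0.97, True)]
result = triage(events)
print(len(result["incidents"]), "incidents,",
      len(result["manual_review"]), "for manual review")
```

The point of the sketch is the shape of the funnel, not the numbers: each stage cuts volume by orders of magnitude so that human attention is spent only at the narrow end.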
What’s also becoming clear is that as enterprises integrate AI more deeply, they’re encountering a new layer of operational and security complexity. The risks don’t stop at AI-generated threats; they extend to how the AI itself is being built, accessed, and governed. That’s where innovations like Prisma AIRS come in: we’re moving from securing against AI to securing the entire AI ecosystem itself.
With AI enabling polymorphic attacks and evasive malware at scale, what changes are needed in existing SOC workflows to effectively detect and respond to threats that evolve faster than human analysts can process?
Today’s polymorphic threats can shift behavior, appearance, and intent in real time—far faster than traditional SOC workflows can interpret or respond. Static playbooks and manual investigation methods are no longer sufficient. SOCs must evolve from reactive response centers into autonomous operations that learn continuously, prioritize effectively, and act in real time. This means embedding AI at the core of detection and response—not as an add-on, but as an engine that reduces alert fatigue and accelerates contextual understanding.
We’ve seen how AI-generated threats can overwhelm SOC teams with alert noise and slow down detection. By adopting intelligent automation and context-aware workflows, teams can cut investigation time and surface hidden threat chains more effectively. But that’s only one side of the equation.
Additionally, defending against polymorphic threats requires stress-testing your own environment the way an attacker might—using red teaming approaches that are dynamic, continuous, and AI-driven. This kind of proactive strategy allows security teams to uncover potential blind spots before adversaries do.
One thing’s for certain: the future of SOCs isn’t more analysts or more alerts. It’s smarter, adaptive systems that evolve just as fast as the threats they’re built to stop.
Autonomous security operations are gaining momentum with platforms like Cortex XSIAM — what challenges do organizations face when transitioning from traditional SIEM/XDR setups to truly AI-driven, autonomous threat response systems?
Moving towards genuinely autonomous operations requires a change in perspective, processes, and confidence in AI-generated results. Many teams are used to manual investigations and rule-based alerts, so handing over decisions to AI systems can feel like a loss of control. That uncertainty can lead to hesitation, especially when there’s a lack of transparency around how AI reaches its conclusions.
Moving forward, organizations need to build trust in the system—ensuring clear auditability, strong governance, and visibility into AI-driven actions.
Another challenge we often observe is maintaining data integrity and preventing model drift, particularly when organizations rapidly scale AI without adequate governance. Teams need to rethink how they measure success, moving from alert volumes to response outcomes, and train analysts to work alongside AI, not around it.
In complex cloud-native environments, how is AI helping close visibility and response gaps across assets, identities, and APIs—especially when telemetry is fragmented across hybrid and multi-cloud setups?
In complex cloud-native environments, fragmented telemetry across assets, identities, and APIs often creates blind spots that attackers can exploit. AI helps close these gaps by ingesting data from hybrid and multi-cloud systems, identifying patterns, and detecting anomalies that traditional tools may miss.
By establishing behavioral baselines, AI can surface abnormal identity activity or unusual API usage with far greater accuracy. As environments grow more dynamic, AI is becoming essential for enabling proactive, automated responses across the entire attack surface. It allows security teams to focus on critical threats while reducing noise and increasing coverage across an otherwise overwhelming operational landscape.
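As an illustration, a behavioral baseline can be as simple as a statistical check on per-identity activity. The sketch below, with an assumed data shape and z-score threshold, shows the basic idea rather than any specific vendor’s detection logic.

```python
# A minimal behavioral-baseline check on API usage per identity.
# The z-score threshold (3.0) and data shape are illustrative assumptions.
import statistics

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the current observation if it deviates far from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against flat baselines
    z = (current - mean) / stdev
    return abs(z) > threshold

# Example: a service account that normally makes ~50 API calls per hour.
baseline = [48, 52, 47, 51, 50, 49, 53, 50]
print(is_anomalous(baseline, 51))    # False: within normal range
print(is_anomalous(baseline, 400))   # True: possible credential misuse
```

Real systems layer many such baselines (per identity, per API, per region) and feed the anomalies into the triage funnel rather than alerting on each one individually.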
Risk quantification remains a key challenge with AI-based security investments. How can outcomes from AI-driven threat detection be effectively measured or benchmarked against traditional controls to justify strategic adoption?
Traditional security controls were often judged by volume: how many alerts were generated, how many logs were collected. But AI fundamentally shifts the conversation from quantity to quality of detection. Today, outcomes, not outputs, should guide security measurement. This includes metrics like how fast you can detect a threat, how accurately you can prioritize it, and how much noise you can eliminate for your teams.
To measure the effectiveness of AI-driven detection, we need to move toward metrics like mean time to detect (MTTD) and respond (MTTR), reduction in false positives, and analyst efficiency. These aren’t always flashy numbers—but they’re far more meaningful in determining whether an organisation’s security posture is improving.
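For teams that want to operationalise these numbers, the sketch below shows how MTTD and MTTR can be computed from basic incident timestamps. The record fields are assumptions standing in for a SIEM or case-management export.

```python
# Computing MTTD and MTTR from incident timestamps. Fields are hypothetical;
# real data would come from a SIEM or case-management export.
from datetime import datetime

incidents = [
    # (first malicious activity, detection time, containment/resolution time)
    (datetime(2025, 7, 1, 9, 0),  datetime(2025, 7, 1, 9, 20), datetime(2025, 7, 1, 11, 0)),
    (datetime(2025, 7, 3, 14, 0), datetime(2025, 7, 3, 14, 5), datetime(2025, 7, 3, 15, 30)),
]

def mean_minutes(deltas) -> float:
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([detected - started for started, detected, _ in incidents])
mttr = mean_minutes([resolved - detected for _, detected, resolved in incidents])
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")
```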
Ultimately, AI’s job isn’t just to find more threats—it’s to help you act on the right ones, faster. That’s where the real return lies.
aanchalg@cybermedia.co.in