Our ancestors would travel miles with their precious gems and silks. Hence, the dacoits and pirates. Then their progenies would hide their treasures underground or tuck them inside well-built houses. Hence, the burglars. Then, there were banks and their well-bolted lockers. Hence, the robbers. Then, money changed into the form of expensive art. Hence, the conmen and their heists. And then, the very nature of our treasures became intangible—but more critical. Hence, all those malware, ransomware and DDoS attacks. With AI’s advent, what we guard, and how we guard it, will change in a massive way. None of our currencies, luxury stones, sculptures, sneakers, gates, walls, trunks, bouncers, locks or Ninjas will be relevant anymore. Because this time we do not know what we are guarding, or from whom. At least, not that clearly—for now. Are we ready for the security turbulence of AI—both as a force of offence and defence—as we embrace its eye-popping breakthroughs? Is AI security really different from erstwhile cybersecurity? What will change now—the gems or the dacoits?
Both—and also the way we guard them. Let’s unbox this future today. Or at least, try to.
The Chests Change–And Everything Follows
AI’s first impact is on the very gravity and texture of data. Yesteryear’s boxes of servers, storage, disks and software are still grappling with the new face of tenants that are approaching the storage towns. Data’s speed, transience, opacity, implications and real-time fragility are going to be so different in the AI age.
All this injects a new degree of criticality in how we secure our AI information trunks.
As Zscaler’s Suvabrata Sinha, CISO-in-residence, India, points out, generative models can now craft hyper-personalised phishing, create socially engineered campaigns, and even mimic voices using deepfakes. “What makes this especially dangerous is that these threats don’t require high skill, just access to the right tools.”
Attacks in the past were mostly manual, based on rules, and developed more slowly, reminds Dipesh Kaura, Country Director, India & SAARC, Securonix. “However, in the age of AI, threat actors employ AI to carry out automated, adaptive attacks (such as deepfake-driven phishing and WormGPT), rendering conventional detection methods inadequate.”
“In the past, attacks were comparatively controllable in terms of both frequency and complexity. Threat actors can now instantly scale their campaigns, increasing their volume and intensity, thanks to AI.”
- Dipesh Kaura, Country Director, India & SAARC, Securonix
And it’s not that we are fighting against better-armed criminals only. As Diwakar Dayal, Managing Director & Area Vice President–India & SAARC, SentinelOne slices it, in today’s AI-powered threat landscape, enterprises are no longer just defending against malicious actors—they’re defending against intelligent, adaptive machines. “We are seeing a sharp rise in polymorphic malware, AI-generated phishing, and deepfakes that can evade traditional defences with unprecedented speed and precision.”
“Reactive, rule-based and human-guided tools are simply not fast enough. We need a proactive, autonomous approach to cybersecurity. We need to fight machines with machines.”
- Diwakar Dayal, MD, India & SAARC, SentinelOne
“As we enter the AI-first era, enterprise cybersecurity is shifting from reactive defence to predictive resilience. In the past, security largely relied on static controls and rule-based detection. Today, threat actors are weaponising the same tools to generate polymorphic malware, automate attacks, and exploit vulnerabilities faster than traditional defences can respond,” notes Vaibhav Dutta, AVP & Global Head–Cybersecurity Products & Services, Tata Communications.
Attackers are investing heavily in automation, reconnaissance, and scalable operations. Their playbooks emphasise speed, stealth, and scalability, while far too many organisations remain overburdened with reactive patch cycles and static security strategies.
Vivek Srivastava, Country Manager, India & SAARC, Fortinet picks some pages from Fortinet’s 2025 Threat Landscape Report, which reveals a dramatic escalation in both the scale and sophistication of cyberattacks. “Data shows adversaries are moving faster than ever, automating reconnaissance, compressing the time between vulnerability disclosure and exploitation, and scaling their operations through the industrialisation of cybercrime. Across all attack phases, FortiGuard Labs observed that threat actors are leveraging automation, commoditised tools, and AI to erode the traditional advantages held by defenders.”
“Attackers no longer have to identify vulnerabilities manually. Instead, they can leverage automated scanning, ML, and neatly-packaged exploit kits to weaponise newly disclosed security flaws within hours of discovery.”
- Vivek Srivastava, Country Manager, India & SAARC, Fortinet
As Shikhar Aggarwal, Chairman, BLS E-Services Ltd. adds, “Unlike the past, where threats were often static and perimeter-based defenses sufficed, today’s landscape is defined by AI-powered attacks such as deepfake social engineering, automated malware, and adversarial machine learning, which evolve in real-time and bypass conventional detection methods.”
Cyberattacks now leverage Generative AI to automate phishing, mutate code, and launch real-time hyper-realistic impersonation scams at scale, weighs in Sundar Balasubramanian, Managing Director, Check Point Software Technologies India & South Asia. “Vulnerabilities are exploited faster than traditional tools can respond. In turn, defenders have adopted AI-driven anomaly detection, behavioral analytics, and real-time response tools.”
“Previously, organisations relied on signature-based detection and manual processes—ineffective against today’s AI-powered threats.”
- Sundar Balasubramanian, MD, Check Point India & South Asia
The way these threats manifest into impact is very fast and very deep. “At Zscaler, we’ve observed this shift firsthand; our Zero Trust Exchange platform processes over 500 trillion signals per day and has seen a nearly 600 per cent surge in enterprise AI/ML traffic over the past year, reaching over 3 billion monthly transactions. In 2024 alone, enterprises blocked 59.9 per cent of AI-related transactions, reflecting significant concerns around data leakage and compliance risks. These numbers make it clear: AI is both a powerful tool and a potential threat,” dissects Sinha.
“AI security is not just the progress of traditional security, but a complete redefinition of how we approach cybersecurity.”
- Suvabrata Sinha, CISO-in-residence, Zscaler
Srivastava also highlights how data poisoning is the third-most encountered AI-driven cyberthreat. “This growth in AI-powered cyberthreats is incremental beyond the traditional threats that have plagued organisations, such as ransomware, malware, and phishing.”
Aaditya Uthappa, Co-Founder & COO, Accorian underlines the concern. “AI is transforming enterprise cybersecurity at its core—not just by escalating the volume and velocity of attacks, but by breaking the assumptions traditional security models rely on. Security practitioners need to fend against the double-edged sword—AI-powered threats & attacks.”
How do we start getting re-oriented for these waters then?
We Need Not Just Gatekeepers, But Commandos
Just being strong, precise, agile, and ruthless will not suffice anymore on the defence side. It is time to add pre-emptive punches, hard-to-break resilience and real-time eyesight now.
As Arjun Chauhan, practice director, Everest Group advises, enterprises will have to include a lot more areas now in their approach than they did before. This means a host of things. “Expand the threat model to include LLM input/output flows, data poisoning and model theft. Deploy layered AI-aware controls like LLM firewalls, retrieval filters, guarded RAG pipelines, confidential compute for model inference. Also mandate ML-BOM/SBOM in procurement and CI/CD, with cryptographic signing of datasets and model artefacts. And operationalise AI red-team/blue-team loops before and after every major model update.”
“It is also important to align with emerging standards (ISO 42001, NIST AI RMF) and be ready to self-assess under the EU AI Act high-risk tier.”
- Arjun Chauhan, Practice Director, Everest Group
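One item in Chauhan’s checklist, cryptographic signing of datasets and model artefacts, can be sketched in a few lines. The snippet below is a minimal illustration using only Python’s standard library; the key and artefact bytes are placeholders, and a production supply chain would use asymmetric signatures (for instance, via tools like Sigstore’s cosign) with keys held in a KMS or HSM rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical signing key; in practice this would be fetched from a KMS/HSM.
SIGNING_KEY = b"example-secret-key"

def sign_artifact(data: bytes) -> str:
    """Return an HMAC-SHA256 tag over a model/dataset artefact's bytes."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, tag: str) -> bool:
    """Constant-time check that the artefact was not tampered with."""
    return hmac.compare_digest(sign_artifact(data), tag)

# Placeholder standing in for real model weights.
model_bytes = b"\x00\x01fake-model-weights"
tag = sign_artifact(model_bytes)

assert verify_artifact(model_bytes, tag)             # untouched artefact passes
assert not verify_artifact(model_bytes + b"x", tag)  # altered artefact fails
```

Recording such a tag in the ML-BOM at training time, and re-verifying it before every deployment, is what turns “signed artefacts” from a procurement checkbox into a poisoning control.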
Sinha suggests that it is no longer about guarding a fixed perimeter, but about continuously verifying identity, context, and behavior before granting access. Kaura calibrates that in the past, detection mainly depended on static rules and recognised attack signatures, frequently identifying threats only after harm had been done. “These days, anomaly patterns and contextual insights can be used to detect unknown threats early.” Also, in the past, humans were in charge of investigations, triage, and decision-making. AI now performs threat correlation, first-level triage, and action recommendation, he indicates.
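Kaura’s point about anomaly patterns can be illustrated with a toy baseline detector: flag any value that deviates sharply from the norm, here a hypothetical series of hourly login counts. This is a deliberate simplification (real platforms model many behavioural signals jointly), using only Python’s standard library:

```python
import statistics

def zscore_anomalies(values: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of points whose z-score exceeds the threshold,
    a stand-in for the behavioural baselining the vendors describe."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical hourly login counts; the final spike is the "unknown threat".
logins = [12, 11, 13, 12, 14, 11, 240]
print(zscore_anomalies(logins))  # [6]
```

The detector never needed a signature for the spike; the deviation from learned behaviour alone raises the flag, which is exactly why this approach can catch threats no rule has yet been written for.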
Balaji Rao, Area Vice President, India & SAARC, Commvault concurs that this is the age of AI-driven cyber resiliency—proactive, predictive, and resilient, where features like anomaly detection, threat deception, and automated recovery are not optional but essential.
Vaibhav Koul, Managing Director, Protiviti Firm also echoes that direction. “AI is transforming both sides of cybersecurity. Threat actors use it to automate, adapt, and attack at scale—so defenders must use it to predict, prevent, and preserve trust.” But enterprises must address ethical concerns like algorithmic bias and data privacy while ensuring robust adversarial training for AI models, Aggarwal stresses.
What’s interesting ahead is that AI enables predictive defense—ranging from dark web exposure monitoring to automated risk remediation and 99.8 per cent zero-day threat prevention, highlights Balasubramanian. “As threats and defenses accelerate to machine speed, the future of cybersecurity lies in responsible AI, unified visibility, and Zero Trust enforcement—securing not just against AI, but with it.”
We have travelled a long road from pipe-smoking, sword-brandishing pirates to invisible AI monsters lying hidden in silent waters. Guarding any new object in the AI era would be a tough mission for tomorrow’s Ethan Hunts or Frank Martins to accept. Not because these heroes are any less brave, lightning-quick, sharp, clever or creative. But because what they protect and whom they protect it from can sometimes be the same thing. It’s not a bottle, a briefcase, a syringe or a person anymore. More like a Labubu doll box. Blind, devilish, in various avatars, huggable and quite a doll!
pratimah@cybermedia.co.in