What should enterprises remember before onboarding AI? How much does the security aspect change?
Our ancestors would travel miles with their precious gems and silks. Hence, the dacoits and pirates. Then their progenies would hide their treasures underground or tuck them inside well-built houses. Hence, the burglars. Then, there were banks and their well-bolted lockers. Hence, the robbers. Then, money changed into the form of expensive art. Hence, the conmen and their heists. And then, the very nature of our treasures became intangible - but more critical. Hence, all those malware, ransomware and DDoS attacks. With AI’s advent, what we guard, and how we guard it, will change massively.
What do we protect? And from whom?
AI’s first impact is on the very gravity and texture of data. The yesteryear boxes of servers, storage, disks and software are still grappling with the new kind of tenant approaching the storage towns. Data’s speed, transience, opacity, implications and real-time fragility will be profoundly different in the AI age.
As Zscaler’s Suvabrata Sinha, CISO-in-residence, India points out - generative models can now craft hyper-personalised phishing, create socially engineered campaigns, and even mimic voices using deepfakes. “What makes this especially dangerous is that these threats don’t require high skill, just access to the right tools.”
Attacks in the past were mostly manual, based on rules, and developed more slowly, reminds Dipesh Kaura, Country Director, India & SAARC, Securonix. “However, in the age of AI, threat actors employ AI to carry out automated, adaptive attacks (such as deepfake-driven phishing and WormGPT), rendering conventional detection methods inadequate.”
And it is not only better-armed criminals that we are up against. As Diwakar Dayal, Managing Director & Area Vice President – India & SAARC, SentinelOne slices it, in today’s AI-powered threat landscape, enterprises are no longer just defending against malicious actors, they’re defending against intelligent, adaptive machines. “We are seeing a sharp rise in polymorphic malware, AI-generated phishing, and deepfakes that can evade traditional defenses with unprecedented speed and precision.”
“Today, threat actors are weaponising the same tools to generate polymorphic malware, automate attacks, and exploit vulnerabilities faster than traditional defences can respond,” notes Vaibhav Dutta, AVP & Global Head – Cybersecurity Products & Services, Tata Communications.
Attackers are investing heavily in automation, reconnaissance, and scalable operations. Their playbooks emphasise speed, stealth, and scalability, while far too many organisations remain overburdened with reactive patch cycles and static security strategies.
Vivek Srivastava, Country Manager, India & SAARC, Fortinet emphasises the speed aspect. “Data shows adversaries are moving faster than ever, automating reconnaissance, compressing the time between vulnerability disclosure and exploitation, and scaling their operations through the industrialisation of cybercrime.”
As Shikhar Aggarwal, Chairman, BLS E-Services Ltd. adds, “Unlike the past, where threats were often static and perimeter-based defenses sufficed, today’s landscape is defined by AI-powered attacks such as deepfake social engineering, automated malware, and adversarial machine learning, which evolve in real-time and bypass conventional detection methods.”
Cyberattacks now leverage generative AI to automate phishing, mutate code, and launch real-time hyper-realistic impersonation scams at scale, weighs in Sundar Balasubramanian, Managing Director, Check Point Software Technologies India & South Asia.
Aaditya Uthappa, Co-Founder & COO, Accorian underlines the concern. “AI is transforming enterprise cybersecurity at its core - not just by escalating the volume and velocity of attacks, but by breaking the assumptions traditional security models rely on. Security practitioners need to defend against the double-edged sword: AI-powered threats and attacks.”
It is time, then, that we reorient ourselves for these waters.
How to protect?
As Arjun Chauhan, practice director, Everest Group advises, “Expand the threat model to include LLM input/output flows, data poisoning and model theft. Deploy layered AI-aware controls like LLM firewalls, retrieval filters, guarded RAG pipelines, confidential compute for model inference. Also mandate ML-BOM/SBOM in procurement and CI/CD, with cryptographic signing of datasets and model artefacts. And operationalise AI red-team/blue-team loops before and after every major model update.”
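The “LLM firewalls” and “retrieval filters” Chauhan mentions can be pictured as a thin screening layer that inspects what goes into a model and what comes out of it. The sketch below is a deliberately minimal illustration, not any vendor’s product: the pattern lists and function names are invented for this example, and a real control would use far richer detection than a handful of regexes.

```python
import re

# Illustrative patterns only - a production LLM firewall uses far more than this.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS-style access key ID
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # PEM private key header
]

def screen_prompt(user_text: str) -> bool:
    """Return True if the inbound prompt looks safe to forward to the model."""
    return not any(p.search(user_text) for p in INJECTION_PATTERNS)

def screen_output(model_text: str) -> str:
    """Redact secret-shaped strings before the model's answer leaves the pipeline."""
    for p in SECRET_PATTERNS:
        model_text = p.sub("[REDACTED]", model_text)
    return model_text
```

In a guarded RAG pipeline, a check like `screen_prompt` would sit before retrieval and inference, and `screen_output` after generation, so that neither prompt injection nor accidental secret leakage passes through unexamined.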
In the opinion of Kaura, “These days, anomaly patterns and contextual insights can be used to detect unknown threats early.” Balaji Rao, Area Vice President, India & SAARC, Commvault concurs that this is the age of AI-driven cyber resiliency - proactive, predictive, and resilient, where features like anomaly detection, threat deception, and automated recovery are not optional but essential.
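At its simplest, the anomaly-pattern detection Kaura and Rao describe is a statistical baseline check: learn what “normal” looks like, then flag observations that deviate sharply from it. The toy sketch below uses a basic z-score test with made-up numbers; real systems layer contextual signals on top of such baselines.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the historical baseline (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if sigma and abs(x - mu) / sigma > threshold]

# Illustrative numbers: daily failed-login counts for a service account.
baseline = [12, 9, 11, 10, 13, 8, 12, 11, 10, 12]
print(flag_anomalies(baseline, [11, 240, 9]))  # the 240 spike stands out
```

The same idea scales from login counts to network flows or file-access rates; what changes in enterprise tooling is the sophistication of the model, not the underlying principle of deviation from a learned baseline.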
Vaibhav Koul, Managing Director, Protiviti avers that defenders must use AI to predict, prevent, and preserve trust. But enterprises must address ethical concerns like algorithmic bias and data privacy while ensuring robust adversarial training for AI models, Aggarwal stresses.
AI also enables predictive defence, ranging from dark web exposure monitoring to automated risk remediation and 99.8 per cent zero-day threat prevention, highlights Balasubramanian.
We have travelled a long road from pipe-smoking, sword-brandishing pirates to invisible AI monsters lying hidden in silent waters. On this new ship, we do not just need new legs - we need legs that work as eyes too.
BOX: A New Playbook Required Now
Source: Corralled from various industry experts