Self-defending systems are no longer a vision, but a reality in the making

As AI reshapes the cyber battlefield, Sundar Balasubramanian, Managing Director, Check Point Software Technologies, India & South Asia, warns that attackers are moving faster, smarter, and cheaper—forcing organizations to embrace AI-led, autonomous defense systems to stay ahead.

Aanchal Ghatak

How are cybercriminals leveraging AI to enhance phishing, malware evasion, and deepfake scams in 2025?

In 2025, we are seeing cybercriminals use AI as a force multiplier, increasing both the scale and sophistication of attacks. Generative AI is being harnessed to drive highly personalized phishing via email, text, and voice messages that outruns traditional filter capabilities, while deepfakes are fueling impersonation scams, CEO fraud, and even voter manipulation in India for less than ₹8.

Malware has evolved as well, with AI-driven polymorphic code that constantly rewrites itself to evade detection. Attackers are now exploiting vulnerabilities with precision, analysing social media, behaviours, and API exposure to map an attack surface. India is, in fact, among the top 10 most targeted nations, reporting 99 ransomware cases in 2024 and 3,278 weekly cyberattacks, significantly more than the global weekly average of 1,934.
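The evasion problem described above can be illustrated with a toy sketch (illustrative only, no real malware involved): a signature keyed to a file hash stops matching the moment a polymorphic engine changes even a few bytes of padding.

```python
import hashlib

# Toy illustration only (no real malware): two "variants" of the same payload
# logic. A polymorphic engine alters the bytes on every build, modeled here
# as junk padding, so a signature keyed to the file hash never matches twice.
payload = b"connect_c2(); exfiltrate();"

def build_variant(junk: bytes) -> bytes:
    # Hypothetical mutation step: prepend meaningless padding that changes the hash.
    return junk + payload

v1 = build_variant(b"\x90" * 8)
v2 = build_variant(b"\xcc" * 8)

signature_db = {hashlib.sha256(v1).hexdigest()}  # defender fingerprinted variant 1

print(hashlib.sha256(v1).hexdigest() in signature_db)  # True: known variant is caught
print(hashlib.sha256(v2).hexdigest() in signature_db)  # False: mutated variant evades
```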

To keep pace with AI-driven threats, organizations will need to adopt AI-powered defences across all aspects of their operations, from SOCs to Zero Trust architectures to employee awareness programs.

What role does AI play in your current threat detection and response solutions — can you share any recent real-world examples?

Check Point has incorporated artificial intelligence (AI) into its cybersecurity strategy since 2014, well before it became an industry buzzword. Its AI-powered ThreatCloud operates over 55 detection engines and makes over 4 billion security decisions each day across networks, cloud, mobile, and IoT. Many of Check Point's recent product developments centre on AI, including the new Infinity AI Copilot, which automates up to 90% of routine security operations and reduces incident response times by over 80%. The company's GenAI Protect secures employees' use of generative AI by preventing data leakage, protecting intellectual property, and ensuring compliance. Check Point's view is that AI should not be understood merely as automation; it is proactive, intelligent defense. And because high levels of AI-powered security cannot be achieved alone, Check Point continues to partner with Microsoft and NVIDIA to deliver end-to-end security.

Do you believe the cybersecurity industry is entering an “AI vs AI” arms race between attackers and defenders?

Yes, the cyber security landscape is rapidly evolving into an AI-powered arms race between attackers and defenders. On one side, cybercriminals are leveraging generative AI to launch smarter, faster, and more personalized attacks—from deepfake-powered scams and AI-generated phishing campaigns to polymorphic malware that constantly mutates to evade detection.

These tools allow bad actors to scale their efforts with alarming speed and precision. As a result, traditional security mechanisms are under immense pressure. The malicious use of AI is no longer experimental—it’s effective, cheap, and already in play.

On the other hand, AI is equally becoming a powerful force for defenders. It enables real-time threat detection, behavioral analytics, and predictive response—helping organizations prevent breaches before they occur. For instance, Check Point’s ThreatCloud AI makes over 2 billion decisions daily to counter threats at scale. However, this is no longer a static game of catch-up; each defensive AI advancement prompts adversaries to develop smarter offensive strategies, creating a relentless cycle of innovation. The key for organizations lies in adopting a layered AI-driven security approach—fortified with human oversight, resilient data, and cross-industry collaboration—to stay a step ahead in this high-stakes digital battleground. 

What is your view on the future of autonomous cybersecurity — are we heading toward self-defending, AI-led systems?

Absolutely—we are not just heading toward self-defending, AI-led systems, we are actively building them. The future of cyber security is autonomous. It marks a shift from reactive defense to proactive, intelligent security—where AI doesn’t merely assist but becomes the core engine driving protection. We’re seeing a clear evolution from manual, human-dependent processes to AI-assisted and eventually fully autonomous systems that can detect, adapt, and respond to threats in real time, without human intervention. This transformation isn’t just about efficiency—it’s about survival in a hyperconnected world where threats evolve faster than humans can keep up.

Autonomous cyber security is no longer a distant vision—it’s already taking shape. Each step forward unlocks greater resilience, faster response times, and stronger defenses. As we build systems that can self-learn, self-heal, and even simulate adversaries through AI-led red teaming, we’re fundamentally reimagining what cyber security means in the age of AI. At Check Point, we see this not just as an evolution—but as a revolution that redefines the nature of protection, resilience, and trust. 

As attackers begin integrating LLMs and reinforcement learning into toolkits, how is your organization adapting threat intelligence models to detect and counter algorithmically adaptive malware or autonomous intrusion attempts?

As cyber threats grow more intelligent and autonomous through the use of LLMs and reinforcement learning, we are evolving our threat intelligence strategy by deeply integrating AI and machine learning across our detection and response ecosystem. This strategy is operationalized through Check Point’s advanced threat detection framework, built on three pillars: intelligence gathering, verification, and automated response—each enhanced by adaptive algorithms capable of analyzing patterns in real time.

Our AI-driven systems continuously learn from historical data, identify deviations from normal behavior across networks, endpoints, and user activities, and adjust detection criteria accordingly. Tools like SIEM, NDR, and XDR are now augmented with context-aware AI engines that cross-reference internal activity with third-party threat intelligence, dramatically reducing false positives while detecting sophisticated threats like polymorphic malware or lateral movements. Platforms like Check Point Infinity and our Quantum offerings bring this intelligence to life—enabling faster, more precise responses to algorithmically adaptive and autonomous threats.
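The baseline-deviation idea described here can be sketched minimally (a hypothetical illustration, not Check Point's actual models): learn what "normal" event volume looks like from history, then flag observations whose z-score against that baseline is extreme.

```python
from statistics import mean, stdev

# Minimal sketch of baseline-deviation detection (an illustrative assumption,
# not a real product's engine): learn normal hourly event volume from
# historical data, then flag hours that deviate sharply from the baseline.
baseline = [102, 98, 110, 95, 105, 99, 101, 107]  # historical "normal" events/hour
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    # Deviation from learned behaviour, measured in standard deviations.
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(104))  # typical hour -> False
print(is_anomalous(480))  # sudden burst, e.g. lateral movement -> True
```

In practice the detection criteria would be adjusted continuously as new data arrives, which is the "learning" the answer above refers to.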

Have you seen evidence of malware dynamically rewriting itself using AI in the wild?

Yes, there are early signs that AI is being used to enable malware to dynamically rewrite or mutate itself in the wild. One recent example is the group FunkSec, which Check Point Research linked to over 170 attacks in Q1 2025. Evidence suggests that its malware was likely built or enhanced using AI tools—allowing it to generate variants with minimal effort, adapt to different environments, and evade detection mechanisms.

While traditional polymorphic malware could alter its code superficially, AI brings a new level of sophistication, enabling malware to intelligently modify its structure or behavior in response to security controls. This represents a significant shift—lowering the technical barrier for attackers and making malware more agile and unpredictable. As this trend evolves, it will challenge conventional defense methods and further emphasize the need for AI-powered, behavior-based detection systems.
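The shift toward behaviour-based detection mentioned above can be sketched as follows (a simplified illustration; real engines use far richer runtime telemetry): two mutated variants with different bytes still exhibit the same runtime behaviour, which is what gets matched.

```python
# Simplified contrast with code signatures (hypothetical samples and behaviour
# set, for illustration): the variants differ in bytes but share runtime
# behaviour, so a behaviour-based rule catches both where a hash cannot.
variant_a = {"bytes": "90 90 90 ...", "syscalls": {"open", "encrypt", "delete_shadow_copies"}}
variant_b = {"bytes": "cc cc cc ...", "syscalls": {"open", "encrypt", "delete_shadow_copies"}}

# Hypothetical behavioural indicator for ransomware-like activity.
RANSOMWARE_BEHAVIOR = {"encrypt", "delete_shadow_copies"}

def behavior_match(sample: dict) -> bool:
    # Flag the sample if the suspicious behaviour set is observed, regardless of bytes.
    return RANSOMWARE_BEHAVIOR <= sample["syscalls"]

print(behavior_match(variant_a), behavior_match(variant_b))  # True True
```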

What techniques or architectures are most effective today in detecting adversarial AI behavior — particularly when dealing with polymorphic code, AI-generated phishing, or LLM-injected social engineering payloads?

Detecting adversarial AI behavior—particularly in the form of polymorphic code, AI-generated phishing, or LLM-injected social engineering payloads—requires a combination of advanced techniques and resilient architectures. Security teams are increasingly relying on adversarial training, AI red teaming, and anomaly detection models to expose vulnerabilities and defend against manipulated inputs or evasive attack patterns. Federated learning and ensemble modeling further strengthen system robustness by reducing the risk of single-point failure or data poisoning. Additionally, explainable AI (XAI) techniques such as SHAP and LIME provide critical visibility into model behavior, enabling teams to spot and respond to misleading prompts or bias-driven outcomes.
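Ensemble modeling's resistance to single-point failure can be sketched with a toy majority vote (the three detectors below are hypothetical stand-ins for trained models): no single evaded or poisoned detector decides the verdict alone.

```python
# Toy majority-vote ensemble for phishing detection. The detectors are
# hypothetical keyword heuristics standing in for trained models; the point
# is that fooling one detector is not enough to flip the overall verdict.
def url_detector(msg: str) -> bool:
    return "http://" in msg or "bit.ly" in msg

def urgency_detector(msg: str) -> bool:
    return any(w in msg.lower() for w in ("urgent", "immediately", "verify now"))

def credential_detector(msg: str) -> bool:
    return any(w in msg.lower() for w in ("password", "login", "account"))

DETECTORS = (url_detector, urgency_detector, credential_detector)

def ensemble_verdict(msg: str) -> bool:
    return sum(d(msg) for d in DETECTORS) >= 2  # majority vote

print(ensemble_verdict("URGENT: verify now your password at http://bit.ly/x"))  # True
print(ensemble_verdict("Team lunch is at noon on Friday"))                      # False
```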

Architecturally, effective detection is supported by runtime behavioral monitoring tools, including polymorphic-aware sandboxes, honeypots, and decoy LLM endpoints, which help uncover stealthy or shape-shifting attack patterns. Organizations are also adopting AI-specific governance frameworks—like the OWASP Top 10 for LLMs and Google’s Secure AI Framework (SAIF)—to systematically identify and mitigate risks across the AI lifecycle. Embedding AI into Security Operations Center (SOC) workflows enables continuous monitoring, automated threat correlation, and real-time remediation, making it possible to quickly detect and neutralize adversarial AI threats before they escalate.

Do you anticipate a shift from reactive defense (like XDR/SIEM) toward proactive “predictive threat modeling” using generative AI? What are the risks of false positives/negatives in such AI-led defense systems?

Yes, we are seeing a clear shift from reactive defense tools like XDR and SIEM toward more proactive, predictive threat modeling powered by Generative AI. These AI-driven models can simulate attacker behavior, identify potential vulnerabilities early, and help security teams anticipate threats before they materialize. At Check Point, we’re actively exploring these capabilities to enhance preemptive threat detection and response.

That said, AI-led defense systems come with risks—notably false positives and false negatives. False positives can overload analysts and lead to alert fatigue, while false negatives may allow sophisticated attacks to slip through. To address this, it’s critical to pair AI insights with human oversight, continuous model training, and strong validation frameworks to ensure accurate, reliable threat detection.
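The trade-off between false positives and false negatives can be made concrete with a quick calculation on made-up alert counts: even a modest false-positive rate can mean most of what an analyst sees is noise.

```python
# Made-up counts for illustration: out of 10,000 monitored events, the
# detector alerted on 1,130 of them and missed 20 real attacks.
tp, fp, fn, tn = 180, 950, 20, 8850

false_positive_rate = fp / (fp + tn)  # benign events that still page an analyst
false_negative_rate = fn / (fn + tp)  # real attacks the system misses
precision = tp / (tp + fp)            # fraction of alerts worth an analyst's time

print(f"FPR={false_positive_rate:.1%}  FNR={false_negative_rate:.1%}  precision={precision:.1%}")
# With these numbers, only about 16% of alerts are real: the alert-fatigue problem.
```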