Rohan Vaidya, Area Vice President, SAARC & India, CyberArk
Can you illustrate what has changed in the fraud landscape with the advent of AI? Are we witnessing new levels of scale, speed and danger?
AI-powered tools allow attackers to automate and customise their operations, significantly reducing the time and expertise needed to execute them. For instance, phishing emails can now be generated in seconds, with hyper-personalised content tailored to specific targets, making them far more convincing. Similarly, AI enables the creation of malicious code and deepfake content that is increasingly difficult to detect with traditional security defences.
Has personalisation made these attacks more dangerous?
The combination of automation and personalisation amplifies the threat, empowering even less experienced attackers to orchestrate large-scale, coordinated attacks that exploit both human and machine identities. This escalation underscores the critical need for proactive cybersecurity measures, particularly around securing identities, both human and machine, which are often the first targets in AI-driven attacks.
Are GenAI and LLMs being abused, especially to create and propagate scams and fraud?
Yes, that is a growing concern that has serious consequences. These technologies have reduced the barriers for cybercriminals, enabling them to produce highly sophisticated phishing emails, deepfake content, and other forms of deception at an unprecedented scale.
Any examples?
For instance, even with minimal publicly available information, such as a company name, recent project details, or an executive’s communication style, generative AI can still create highly personalised phishing emails that closely mimic legitimate corporate correspondence. These emails mirror the tone, structure, and vocabulary of genuine communications, making it significantly easier for attackers to bypass conventional security defences and manipulate employees into disclosing sensitive information or interacting with malicious links.
The scalability of such attacks further amplifies the risk. Threat actors can target hundreds or even thousands of individuals simultaneously, significantly increasing the potential damage to organisations. This highlights the pressing need for strong security measures that not only defend against unauthorised access but also account for the evolving tactics powered by Generative AI. By addressing these vulnerabilities proactively, organisations can better protect their critical systems, sensitive data, and overall operational integrity.
Would we see more tools like FraudGPT and WormGPT?
FraudGPT and WormGPT exemplify how cybercriminals continually adapt and innovate, leveraging advancements in technology for malicious purposes. These generative AI tools empower attackers to automate and scale their operations, from crafting highly convincing phishing emails to generating malware code, all the while requiring minimal technical expertise. By lowering the entry barrier to cybercrime, such tools enable even inexperienced attackers to execute sophisticated and damaging attacks. As Generative AI becomes increasingly accessible, similar tools will likely emerge, each tailored to exploit specific vulnerabilities, industries, or systems. This alarming trend highlights the critical need for a proactive and comprehensive approach to cybersecurity.
What can AI model creators and companies do better? Is the technical complexity here too formidable to solve?
The exploitation of AI for fraudulent purposes stems from a combination of oversight and technical complexity. On one hand, there is often insufficient foresight in integrating strong guardrails during the development of AI models, such as controls to prevent misuse. On the other hand, the adaptive and open-ended nature of generative AI models makes it inherently challenging to address all potential vulnerabilities. Even with safeguards in place, these models can be repurposed in ways that their creators did not anticipate.
To mitigate these risks, AI developers and organisations must adopt a multifaceted approach. First, embedding ethical frameworks and secure design principles into AI systems from the outset is critical. This includes implementing measures such as stringent access controls, continuous usage monitoring, and mechanisms to detect and flag potentially harmful content. Second, promoting collaboration between AI developers, regulators, and the broader cybersecurity community is essential. By working together, these stakeholders can establish industry-wide standards and proactively address emerging threats.
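As a rough, illustrative sketch of the "detect and flag" piece of such guardrails, the snippet below screens each request against a small set of policy patterns and logs every call for audit. The pattern list, logger name and screen_request function are hypothetical placeholders, not any vendor's actual implementation; a production system would rely on vetted classifiers and organisation-specific policies rather than a static deny-list.

```python
import logging
import re
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrail")

# Illustrative policy patterns only; real deployments would use
# trained content classifiers and organisation-specific rules.
BLOCKED_PATTERNS = [
    r"disable .*security controls",
    r"generate .*malware",
    r"credential.?harvest",
]

@dataclass
class ScreeningResult:
    allowed: bool
    reasons: list

def screen_request(user_id: str, prompt: str) -> ScreeningResult:
    """Flag potentially harmful prompts and log every request for audit."""
    reasons = [p for p in BLOCKED_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    allowed = not reasons
    # Continuous usage monitoring: every call is logged, flagged or not.
    log.info("user=%s allowed=%s flags=%s", user_id, allowed, reasons)
    return ScreeningResult(allowed=allowed, reasons=reasons)

if __name__ == "__main__":
    print(screen_request("analyst-42", "Write a report on phishing trends"))
```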
How can the industry fight jailbreaking, adversarial prompts and data-related gaps? Are these serious issues?
Addressing challenges like jailbreaking, adversarial prompts, and data vulnerabilities in AI systems demands a multi-pronged approach. These threats exploit inherent weaknesses in AI models, enabling malicious actors to bypass security measures or manipulate systems for harmful purposes. To counter these risks, AI developers must prioritise embedding advanced guardrails that detect and neutralise adversarial inputs or jailbreak attempts. Techniques such as reinforcement learning with human feedback (RLHF) can be further refined to improve model resilience against manipulation.

Equally important is ensuring strong data governance. This includes encrypting sensitive data, enforcing stringent access controls, and deploying continuous monitoring to detect unauthorised access or anomalies in data usage. Organisations should also implement advanced threat detection systems powered by AI to identify unusual patterns or behaviours indicative of adversarial attacks.
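For the data-governance piece, a minimal sketch (assuming the open-source Python cryptography package) might encrypt sensitive fields, restrict decryption to authorised roles, and write an audit entry on every read. The role names and helper functions here are illustrative only; a real deployment would hold keys in a secrets vault and pull roles from a central identity provider.

```python
import logging
from cryptography.fernet import Fernet  # assumes the 'cryptography' package is installed

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("data-governance")

# In practice the key would live in a secrets manager, never in code.
cipher = Fernet(Fernet.generate_key())

AUTHORISED_ROLES = {"ml-pipeline", "security-analyst"}  # illustrative role names

def store_sensitive(value: str) -> bytes:
    """Encrypt a sensitive field before it is persisted or used for training."""
    return cipher.encrypt(value.encode())

def read_sensitive(token: bytes, role: str) -> str:
    """Enforce an access check and leave an audit trail on every read."""
    if role not in AUTHORISED_ROLES:
        audit.warning("DENIED read attempt by role=%s", role)
        raise PermissionError(f"role {role!r} may not read this field")
    audit.info("read by role=%s", role)
    return cipher.decrypt(token).decode()

if __name__ == "__main__":
    blob = store_sensitive("customer-record-0001")
    print(read_sensitive(blob, "security-analyst"))
```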
What new attack vectors or trends do you see today?
One critical area of focus is the emergence of machine identities as a growing attack vector. As AI models and automated systems proliferate, the number of machine-to-machine interactions increases exponentially. Securing these interactions is just as important as protecting human identities, as compromised machine identities can lead to significant breaches.
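One common way to secure machine-to-machine interactions is to give each workload a short-lived, verifiable credential rather than a static secret. The sketch below is a simplified, standard-library illustration of that idea, signed and expiring tokens checked before one service trusts another; the service names and TTL are assumptions, and real deployments would obtain credentials from an identity provider or secrets vault rather than an in-process key.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Shared signing secret; in practice each workload would receive its own
# short-lived credential from an identity provider or secrets vault.
SIGNING_KEY = secrets.token_bytes(32)
TOKEN_TTL_SECONDS = 300  # short-lived by design

def issue_token(service_name: str) -> str:
    """Issue a signed, expiring token representing a machine identity."""
    claims = {"sub": service_name, "exp": int(time.time()) + TOKEN_TTL_SECONDS}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str) -> dict:
    """Verify signature and expiry before one service trusts another."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature: untrusted machine identity")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise ValueError("token expired: re-authentication required")
    return claims

if __name__ == "__main__":
    print(verify_token(issue_token("billing-service")))
```

Keeping the lifetime short limits the damage a stolen machine credential can do, which is the point of treating machine identities with the same rigour as human ones.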
What about insider threats, especially those stemming from ignorance?
Insider threats, whether intentional or accidental, are a key concern. Employees or partners with access to sensitive AI tools and data can inadvertently or deliberately contribute to security gaps. Organisations must implement strong identity and access management (IAM) protocols to ensure that only authorised individuals have access to critical systems and data. Additionally, awareness and education are vital: organisations must ensure that their employees understand the risks posed by AI misuse, including phishing, deepfakes, and adversarial attacks, and how to mitigate them. At CyberArk, we advocate for a defence-in-depth approach to identity security, encompassing both human and machine identities.
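As a small illustration of the least-privilege principle behind IAM, the sketch below gates a sensitive operation behind a role-to-permission check. The roles, permissions and deploy_model function are hypothetical; in practice these mappings would come from a central identity platform rather than an in-code dictionary.

```python
from functools import wraps

# Illustrative role-to-permission mapping; real IAM would come from a
# central identity provider, not an in-code dictionary.
ROLE_PERMISSIONS = {
    "admin": {"read_model", "deploy_model", "rotate_keys"},
    "data-scientist": {"read_model"},
}

def requires(permission: str):
    """Least-privilege gate: callers get only the access their role grants."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"{user_role!r} lacks {permission!r}")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires("deploy_model")
def deploy_model(user_role: str, model_id: str) -> str:
    return f"model {model_id} deployed by {user_role}"

if __name__ == "__main__":
    print(deploy_model("admin", "fraud-detector-v2"))      # allowed
    # deploy_model("data-scientist", "fraud-detector-v2")  # would raise PermissionError
```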
pratimah@cybermedia.co.in