/dq/media/media_files/2025/08/10/praveen-kulkarni-2025-08-10-11-53-37.jpg)
Praveen Kulkarni , Director-Security, Risk & Governance, OpenText India
How potent and widespread is the abuse of GenAI and LLMs in fraud and scam creation/propagation?
The misuse of generative AI has become deeply concerning. What was once confined to clumsy, easy-to-spot scams has now evolved into highly convincing phishing emails, deepfakes, and automated social engineering at scale. We have seen a notable rise in AI-driven phishing attacks, with 45 per cent of organisations reporting an increase just last year. This is no longer theoretical–it is the daily reality security teams are grappling with. The entry barrier for malefactors has fallen significantly. They no longer require advanced technical expertise; with a prompt and a dash of malice, Generative AI can now perform the heavy lifting on their behalf. That is the essence of the issue–scale and complexity, all powered by AI. In addition to these findings, the 2024 OpenText Cybersecurity Threat Report highlights that 55 per cent of respondents believe their companies are at greater risk of suffering a ransomware attack due to the proliferation of AI usage among threat actors. This underlines the growing concern within the industry regarding AI’s role in enhancing cyber threats.
What was once confined to clumsy, easy-to-spot scams has now evolved into highly convincing phishing emails, deepfakes, and automated social engineering at scale.
Why do FraudGPT and WormGPT exist? Would we see more such tools?
FraudGPT and WormGPT emerged from the darker corners of the internet – essentially black-hat versions of ChatGPT. They exist because there is demand: a shadow economy of cybercrime seeking efficiency, and these models deliver. Whether it’s coding ransomware or generating counterfeit IDs, they execute nefarious tasks with precision. And yes, expect to see more such tools unless there is a multi-pronged strategy to clamp down – strengthened governance, ethical AI design, and enhanced sharing of threat intelligence. Our strategy at OpenText is based on proactive detection of threats, not reactive defence. These malicious tools are only becoming smarter, and our response must be equally intelligent. Furthermore, OpenText emphasises the importance of integrating AI-powered threat detection tools that go beyond traditional boundaries, enabling organisations to proactively protect their assets from attacks. This approach is crucial in countering the sophisticated tactics employed by tools like FraudGPT and WormGPT.
FraudGPT and WormGPT emerged from the darker corners of the internet – essentially black-hat versions of ChatGPT. Expect to see more such tools.
Does exploitation of AI add new scale-levels, speed and dangers to the fraud landscape?
AI is not only magnifying historical threats – it is redrawing the entire map. Generative AI makes scams more targeted, phishing more realistic, and deepfakes more prevalent. It enables fraud to move at a pace and scale that human-driven crime cannot match. This includes mimicking executives on live calls, voice sample synthesis, and dynamically adjusting attack vectors. Our cybersecurity research confirms this trend that 69 per cent of respondents named AI as a primary driver of the increase in phishing attacks. So yes, the threat is very real–and it is accelerating.
Are these gaps of neglect or technically tricky? What can AI model creators and companies do better?
It’s a mix of both. Some challenges are genuinely technical–such as stopping adversarial input or model manipulation. However, there has also been a lack of foresight in predicting how models might be exploited. We firmly believe in responsible AI development. That involves model auditing, integrating red-teaming as standard practice, and building models that are not only smart but also contextual, ethical, and compliant with regulatory limits. AI developers must move beyond filter training and invest in post-deployment monitoring, because risks change once models are live. At OpenText, we advocate a layered defence strategy, highlighting the need for red-teaming, zero-trust architecture, behavioural baselining, transparent data governance, and user education. These measures collectively enhance an organisation’s resilience against AI-driven threats.
What’s the seriousness of real-time voice APIs usage on ChatGPT and other prominent/advanced AI models?
Voice-based AI is powerful – but in the wrong hands, it can be harmful. Deepfake voice technology has advanced to the point where it is nearly impossible to distinguish between authentic and synthetic speech, even in real-time. Combined with social engineering, this can lead to serious damage, from CEO scams to voice-authenticated bank fraud. That’s why cross-industry protections are essential. We promote security-by-design across all AI services, including voice. There should be transparent disclosures, opt-ins, watermarking, and quick incident reporting mechanisms whenever AI is used for impersonation. Our commitment to security-by-design ensures that AI services, including those involving voice, incorporate safeguards to prevent misuse. This includes implementing transparent disclosures and rapid incident response protocols to address potential threats promptly.
There should be transparent disclosures, opt-ins, watermarking, and quick incident reporting mechanisms whenever AI is used for impersonation.
What solutions should we be exploring here: specially to fight jailbreaking, adversarial prompts and data-related gaps?
We must think in layers. No single method will suffice. Here is what we believe is most important. Red-teaming and model hardening must be integral to every release cycle. Generative AI models should be tested continuously against adversarial prompts and jailbreak attempts to identify and remediate vulnerabilities early. Robust data governance frameworks are also essential ensuring that models are trained only on high-quality, ethically sourced data and that proper lineage and auditability are maintained throughout. Further, zero-trust principles must extend to AI systems never assume user intent, always verify through controls such as rate-limiting, context-aware filtering, and user authentication. We must also invest in building guardrails into the architecture itself, using prompt monitoring, anomaly detection, and fail-safes.
Building trust in AI also means embedding ethical use frameworks and ensuring compliance with privacy regulations across all data touchpoints. Without this foundation, technical solutions will only go so far in closing the risk gaps.
Anything else on the subject that you can highlight? At a meta level.
What we are witnessing is not just the evolution of fraud but its industrialisation, driven by Generative AI. However, this is also a moment for strong leadership. We need improved regulation, certainly – but more importantly, better collaboration between vendors, researchers, and regulators. Cybersecurity and AI safety are two sides of the same coin and must progress together. Our objective is not only to create secure systems but to help companies build cyber resilience in an AI-powered world. That means embedding trust, transparency, and agility into everything we do from the start. At OpenText, we understand the importance of cross-industry collaboration and proactive measures to meet the challenges of AI-driven cyber threats. By building partnerships and sharing threat intelligence, we can collectively strengthen industry-wide resilience.
pratimah@cybermedia.co.in