As internet fraud accelerates globally, and particularly in India, Google has announced a comprehensive security initiative in the country, anchored by a new AI-led Safety Charter and a security engineering hub in Bengaluru, its first in Asia and fourth worldwide.
The initiative arrives amid exponential growth in online fraud, particularly scams built around the Unified Payments Interface (UPI) and impersonation over video calls. According to recent estimates, UPI-related scam cases surged by 85% in FY 2022-23, causing estimated financial losses of over ₹11,000 crore (US$127 million).
The new Safety Charter sets out a series of commitments by Google to bolster digital safety in India, covering enhanced AI-based fraud detection, stronger local cybersecurity infrastructure, and collaboration with government and civil society stakeholders. The initiative builds on the momentum of projects such as DigiKavach, a 2023 public awareness campaign for reporting malicious apps and fraudulent lending services.
"India is a good early warning system for cybercrime trends," according to Heather Adkins, Google Vice President of Security Engineering. "Having people near the user allows us to respond more quickly, and build experiences with specific security elements."
The Google Safety Engineering Centre (GSEC) in Bengaluru will work with the Ministry of Home Affairs, the Indian Cyber Crime Coordination Centre (I4C), and universities to develop tooling for tackling online threats, with a focus on social engineering fraud, digital infrastructure security, and responsible AI development.
Some of Google's more advanced AI models are already helping stem these trends at scale: they process more than 500 million suspicious text messages a month through its messaging platform, have blocked or disabled almost 60 million harmful app installs in India through Google Play Protect, and have issued more than 41 million real-time scam alerts to users of Google Pay, a leading UPI app.
As generative AI tools become commoditised, Google wants users to be aware of their dual-use risks, especially deepfakes and AI-enhanced phishing. The company is stress-testing its Gemini models and developing its Secure AI Framework to counter such misuse.
India's large user base and rapid digitization also make it a target for low-cost commercial spyware. Adkins noted that surveillance kits available for as little as US$20 are specifically targeting people in India, raising serious concerns about privacy and data security.
To empower users, Google is pushing for broader adoption of multi-factor authentication (MFA)—including SMS-based MFA, which remains the most accessible option for India’s diverse population.
“Passwords aren’t enough anymore,” Adkins said. “A layered approach to security is essential, especially as cyber threats evolve.”
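What that second layer looks like in practice can be sketched with a generic time-based one-time password (TOTP) check, the mechanism behind most authenticator apps. The sketch below uses the open-source pyotp library and hypothetical names (user_secret, ExampleBank); it is an illustration of the layered approach Adkins describes, not a description of any Google product or API.

```python
# Minimal illustrative sketch of a TOTP second factor (requires: pip install pyotp).
# All names here are hypothetical examples, not tied to any real service.
import pyotp

# Enrolment: generate a per-user secret (normally stored server-side and
# provisioned to the user's authenticator app, e.g. via a QR code).
user_secret = pyotp.random_base32()
totp = pyotp.TOTP(user_secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

# Sign-in: the password alone is not enough; the user must also supply the
# current 6-digit code from their device. valid_window=1 tolerates slight clock drift.
submitted_code = input("Enter the 6-digit code from your authenticator app: ")
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted - sign-in can proceed.")
else:
    print("Invalid or expired code - sign-in blocked.")
```

SMS-based codes work the same way conceptually: the password proves what the user knows, while the one-time code proves possession of a registered device.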
As Google deepens its local partnerships and AI defences, its India-first strategy could serve as a global template for digital safety, one built not just on technology but on trust.