AI-generated videos in India
Meta's restriction of two AI-generated videos portraying Prime Minister Narendra Modi and Gautam Adani was not just an act of content moderation; it was a wake-up call for India's online future. The videos, uploaded by the Congress party, were taken down following notices from Delhi Police under India's information technology laws, even though Meta's own review found they did not breach its community standards. The move underscores the growing power, and threat, of artificial intelligence in the cybersecurity and political spheres.
The two AI-generated clips drew controversy because of their realistic depiction of public figures in scenarios that could harm them. Meta complied with the takedown orders promptly to protect its safe harbour protections, which shield platforms from liability for user-generated content. Ignoring the orders would have exposed Meta directly to potential lawsuits, setting a precedent for how platforms must respond to government pressure, though not necessarily to free-speech concerns. India's evolving AI governance framework aims to balance innovation with public safety. The episode highlights the limits of Meta's content moderation when government directives override platform policies.
According to a recent McAfee survey, 75% of Indians said they had encountered deepfakes online, and 22% had seen a political deepfake they initially believed was authentic. The same research found widespread concern about AI-based fraud: 64% of participants said AI made online scams harder to recognise, and only 30% were confident they could distinguish real photographs from AI-altered ones. This data highlights the growing crisis of online misinformation in India. The findings point to two main things:
- Massive exposure, low recognition capacity: a large share of the Indian population has encountered deepfakes, yet only a small segment can identify them consistently.
- Political AI content is already reaching citizens: more than one in five respondents had been exposed to political deepfakes, showing how quickly such content can enter popular conversation.
That three-quarters of Indians have already encountered deepfakes shows the technology has moved from a niche presence to the mainstream. It also explains why authorities such as the Delhi Police are concerned about political AI content, even though Meta's internal review did not flag the material as a breach of the company's own rules.
Why do AI-generated videos pose a particular cybersecurity threat in India?
Artificial intelligence-driven deepfakes and synthetic media are lowering the barrier to disinformation campaigns, making it easier for bad actors to influence populations, impersonate leaders, and undermine democratic processes. AI-driven disinformation campaigns are becoming faster, cheaper, and harder to detect. Beyond politics, cybercriminals are also applying AI to phishing, identity theft, and automated malware distribution, all of which are overwhelming conventional defences. As AI tools grow more sophisticated, so do the threats, and advanced cybersecurity measures are needed to identify and suppress them.
India is tightening its oversight of AI-generated content. New policies now require platforms to label deepfakes and comply with stricter content takedown processes to improve transparency and accountability. The government is also preparing to invest in AI-driven detection systems to counter deepfakes and cyberattacks, taking a risk-based approach to AI governance. Nevertheless, balancing innovation with security remains difficult: India cannot allow AI to be used without any control, because doing so could leave citizens and organisations more vulnerable to harm. This marks a significant shift in AI regulation in India, especially around political and synthetic content.
India tightens AI content rules: Mandatory deepfake labelling and stricter platform accountability
Mandatory deepfake labelling is now central to India's AI governance strategy. India is stepping up regulatory pressure on AI-produced content, with several new policies and technology investments aimed at combating the recent growth of deepfakes and cyberattacks. The government has proposed that platforms label AI-generated content so users can distinguish synthetic media from authentic material, reducing the risk of misinformation. These labelling obligations are part of broader amendments to India's IT rules, which also impose stricter takedown processes on online platforms, requiring them to comply promptly with government notices about potentially harmful material.
Under the recent rule changes, social media companies are now obliged to proactively identify and label deepfakes, or face penalties including the suspension of safe harbour protections. Platforms must therefore invest in AI-based systems that scan for deepfakes and other kinds of falsified media. The government is funding such systems in an effort to build a robust defence against AI-based disinformation and other cyber offences.
The road ahead
Nevertheless, the trade-off between innovation and security remains. While these measures will promote transparency and accountability, overly strict regulation may stifle technological progress and restrict the positive applications of AI. Researchers warn that irresponsible deployment of AI, particularly in high-stakes areas, may expose citizens and organisations to greater risks such as identity theft, financial fraud, and social manipulation. India's strategy must therefore keep evolving, ensuring that security and innovation are not traded against each other but developed together.