New IT rules aim to hold platforms and AI creators accountable for deepfakes

New IT Rules (2025) mandate clear, permanent labelling and metadata for all synthetically generated content, including deepfakes. Significant social media platforms must verify user declarations and label AI content prominently.

Punam Singh

The Ministry of Electronics and Information Technology (MeitY) has proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The proposed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, introduce stringent new obligations centred on mandatory labelling and metadata tagging for all synthetically generated content shared online.

The amendments establish a comprehensive regulatory framework for what is newly defined as ‘synthetically generated information’: content "artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true."

Obligations for AI Creation Platforms (Rule 3(3))

Any intermediary that enables or facilitates the creation of synthetically generated information must embed an indelible identifier in the content at the point of creation.

·  Mandatory Metadata: Every piece of synthetically generated information must be embedded with a permanent, unique metadata tag or identifier.

·  Prominent Labelling Standard: This label/identifier must be visibly displayed or made audible in a prominent manner (a minimal code sketch of these thresholds follows this list). Specifically:

  • For visual content (images, videos), the label must cover at least ten percent of the surface area.
  • For audio content, the marker must be audible during the initial ten percent of its duration.

·  No Tampering: Intermediaries are strictly prohibited from enabling the modification, suppression, or removal of this permanent, unique metadata.
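
The draft prescribes outcomes rather than a technical standard, so the mechanics of the identifier and the label are left to platforms. Below is a minimal Python sketch, assuming the Pillow imaging library, of how an image-generation service might satisfy both obligations for visual content: it writes a unique identifier into PNG metadata and draws a full-width banner whose height is ten percent of the image, which by construction covers ten percent of the surface area. The metadata key synthetic-content-id is a hypothetical name, not one specified in the draft.

```python
import uuid

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str) -> str:
    """Embed a unique identifier in PNG metadata and draw a visible
    label covering at least ten percent of the image's surface area."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size

    # A full-width banner whose height is 10% of the image height
    # covers exactly 10% of the total surface area.
    banner_h = max(1, round(h * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([0, h - banner_h, w, h], fill=(0, 0, 0))
    draw.text((10, h - banner_h + banner_h // 4),
              "SYNTHETICALLY GENERATED (AI) CONTENT", fill=(255, 255, 255))

    # Hypothetical key name: the draft mandates a permanent, unique
    # identifier but does not specify a metadata schema.
    identifier = uuid.uuid4().hex
    meta = PngInfo()
    meta.add_text("synthetic-content-id", identifier)
    img.save(dst_path, "PNG", pnginfo=meta)
    return identifier
```

One caveat: PNG text chunks are trivially easy to strip, so a tag like this would not by itself satisfy the no-tampering requirement; a production system would more plausibly bind the identifier cryptographically, for example via a provenance standard such as C2PA Content Credentials.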

Stricter Due Diligence for Significant Social Media Intermediaries (SSMIs) (Rule 4(1A))

SSMIs (platforms with over 50 lakh, or 5 million, registered users, such as Meta, Google, and X) face enhanced responsibilities:

  • User Declaration: SSMIs must require users to declare whether the content they are uploading is synthetically generated.
  • Technical Verification: Platforms must deploy reasonable and appropriate technical measures, including automated tools, to verify the accuracy of the user's declaration.
  • Prominent Labelling: If the user declares, or technical verification confirms, that the content is synthetic, the SSMI must ensure it carries a clear and prominent label or notice indicating its nature (a sketch of this declare-and-verify flow follows this list).
  • Loss of Safe Harbour: An SSMI that knowingly permits, promotes, or fails to act upon unlabelled or unlawful synthetically generated information will be deemed to have failed due diligence, potentially exposing it to the loss of the legal immunity (safe harbour) it currently enjoys for third-party content.
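
Rule 4(1A) as drafted describes a declare-then-verify pipeline rather than any specific technology. The sketch below, in Python, shows that control flow under stated assumptions: detect_synthetic is a hypothetical stub standing in for whatever automated classifier or provenance check a platform deploys (the draft requires only "reasonable and appropriate technical measures"), and content is labelled if either the user's declaration or the automated check flags it.

```python
from dataclasses import dataclass


@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # the user's declaration at upload time


def detect_synthetic(content_id: str) -> bool:
    """Stub for the platform's automated verification tooling; a real
    deployment would run a deepfake classifier or provenance check here."""
    return False  # placeholder result


def moderate(upload: Upload) -> dict:
    # Label when the user declares the content synthetic OR the
    # platform's own verification flags it; knowingly failing to act
    # on synthetic content risks the SSMI's safe-harbour protection.
    flagged = upload.declared_synthetic or detect_synthetic(upload.content_id)
    return {
        "content_id": upload.content_id,
        "label_required": flagged,
        "label_text": "Synthetically generated information" if flagged else None,
    }


print(moderate(Upload(content_id="vid-001", declared_synthetic=True)))
# {'content_id': 'vid-001', 'label_required': True,
#  'label_text': 'Synthetically generated information'}
```

The "or" is the operative design point: a truthful declaration short-circuits detection, but because the rules require platforms to verify declarations, an SSMI cannot rely on user declarations alone.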

The draft also provides a clear safeguard, noting that an intermediary’s good-faith effort to remove or disable access to unlawful synthetic content will not be seen as a violation of the conditions for immunity under the IT Act.

How India is Stepping Up

Before these proposed 2025 rules, the Indian government had been tackling deepfakes primarily through existing legal frameworks and targeted advisories. Deepfakes were addressed indirectly under the IT Rules, 2021, which required intermediaries to prohibit users from publishing content that is unlawful, misleading, impersonates another person, or invades privacy (Rule 3(1)(b)).

The new shift introduces a clear, statutory definition of 'synthetically generated information,' bringing it explicitly under regulatory purview.

The Indian Computer Emergency Response Team (CERT-In) has previously issued advisories to platforms on detecting, reporting, and removing deepfake content. Under the new rules, those advisories give way to legal mandates requiring platforms and AI creators to deploy technical measures such as metadata embedding and automated verification.

Platform accountability was previously tied primarily to removing content after a government notification, court order, or user complaint. The IT Amendment Rules, 2025, impose a proactive, front-end due diligence obligation on SSMIs to verify and label content before it is published, shifting the point at which liability for negligence attaches.