India has amended its rules for social media platforms, requiring them to remove “unlawful content” within three hours of being notified by authorities and tightening scrutiny of deepfakes. The amendments, published yesterday and taking effect on 20 February, bring deepfakes and other AI-generated impersonation content under the compliance framework.
Why it matters: India’s increasingly stringent digital regulations mean large technology companies must comply to maintain access to the world’s largest digital market. These regulations could serve as a model for MENA governments as they develop their own digital regulatory frameworks.
Disclosure and labeling: Platforms that allow users to upload or share audiovisual material must require users to disclose whether content is synthetically generated or AI-generated. They must verify those disclosures and ensure that such content is prominently labeled. The amended rules bar certain categories of synthetic content, including “deceptive impersonation,” “non-consensual intimate imagery,” and material linked to serious crimes.
Takedown deadlines: Content designated “unlawful” by an executive government order must be taken down within three hours, down from the earlier 36-hour deadline. For certain urgent user complaints involving AI-generated impersonations, platforms must act within two hours.
Compliance burden: Platforms must deploy automated tools to verify, identify, and label deepfakes and to prevent the creation or circulation of prohibited synthetic content. Failure to comply, including when content is flagged by authorities or users, could put platforms’ safe-harbor protections under Indian law at risk.
Expert view
The three-hour takedown requirement is operationally feasible for large platforms but will require greater investment in moderation systems, Ami Kumar, founder of Contrails AI, told EnterpriseAM. “It’s totally workable, it just requires more investment,” he said, adding that with current tools, enforcement can happen “under a second” once content has been flagged as unlawful.
Most of the technical requirements for detecting and labeling synthetic content are already mature. “For 90% of the AI generators implementation is possible and for 10% it is very difficult,” Kumar said, referring to edge cases such as re-recorded or re-captured content and outputs from new open-source models.
Counterview: “Obligations related to the use of automated tools effectively require intermediaries to actively monitor their services for any unlawful content, which risks making them the arbiters of online speech,” Shweta Venkatesan, a fellow at the Esya Centre, told us, adding that these norms could contravene the Supreme Court of India’s judgment, which held that platforms cannot determine the lawful nature of content on their services.
On the scope of the new prohibitions, the language in the rules could affect legitimate expression, Venkatesan noted. “This phrasing is broad enough to cover valuable forms of expression like satire, parody, or journalism analyzing real-world persons or events within its ambit. Thus, this prohibition may have a chilling effect on the right to free speech,” she said.
Enforcement context: Meta restricted more than 28k pieces of content in India in 1H 2025 following government requests.