India has introduced sweeping new regulations requiring social media platforms to remove unlawful content within three hours of receiving official notice, significantly tightening the previous 36-hour compliance window.
The amended guidelines, set to take effect on 20 February, will apply to major platforms including Meta, YouTube and X, and will also cover AI-generated content. The government has not publicly explained why the takedown timeframe was shortened.
The move is widely seen as part of India’s broader effort to strengthen oversight of digital platforms. However, critics argue that the compressed deadline could intensify concerns about censorship in the world’s largest democracy, home to more than one billion internet users.
Under existing Information Technology regulations, Indian authorities already possess the power to direct platforms to remove content deemed illegal under laws related to national security, public order and other statutory provisions. According to transparency reports, more than 28,000 web links were blocked in 2024 following government directives.
In addition to accelerating content removal timelines, the amended rules introduce new provisions targeting AI-generated material. For the first time, the law formally defines AI-generated content, including manipulated audio and video designed to appear authentic, such as deepfakes. Routine edits, accessibility features and legitimate educational or design work are excluded from this definition.
Platforms that allow users to create or share AI-generated material will now be required to clearly label such content. Where technically feasible, companies must also embed permanent markers to ensure traceability. The rules prohibit the removal of these labels once applied.
Moreover, platforms must deploy automated systems to detect and prevent the dissemination of illegal AI content, including deceptive or non-consensual material, fabricated documents, child sexual abuse material, explosives-related information and impersonation.
Digital rights organisations and technology analysts have expressed concerns about both the feasibility and broader implications of the new measures.
The Internet Freedom Foundation warned that the three-hour deadline could effectively turn platforms into “rapid-fire censors,” eliminating meaningful human review and encouraging excessive automated removals.
Anushka Jain, a research associate at the Digital Futures Lab, welcomed the AI labelling requirement as a step toward greater transparency. However, she cautioned that the significantly shortened timeline may push companies toward fully automated moderation systems, increasing the risk of unjustified content takedowns.
Delhi-based technology analyst Prasanto K Roy described the policy as “perhaps the most extreme takedown regime in any democracy,” arguing that compliance within such a limited timeframe would be nearly impossible without extensive automation and minimal human oversight.
The Indian government has yet to respond publicly to these concerns.
As the new rules come into force, the balance between regulating harmful online content and safeguarding freedom of expression is likely to remain at the centre of debate.