
India mandates 3-hour takedown for AI content: an FAQ on what you need to know
New IT rules require swift removal of unlawful posts and clear labelling of AI-generated material to curb deepfakes and online abuse
The Union government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, tightening obligations on social media platforms and AI tool providers amid rising concerns over deepfakes, non-consensual imagery and misleading synthetic content.
Issued by the Ministry of Electronics and Information Technology (MeitY), the new rules mandate faster takedowns of unlawful content, compulsory labelling of AI-generated material and stronger compliance requirements. The amendments come into force on February 20, 2026.
Here is a quick FAQ covering what you need to know about the changes.
What has the government changed in the IT rules?
The government has amended the IT Rules, 2021, to bring AI-generated and synthetic content explicitly within their ambit, mandate faster takedowns of unlawful content, and require clear labelling and metadata for AI-created material.
Why were these changes introduced?
The amendments respond to the growing misuse of AI to create and circulate deepfakes, non-consensual intimate imagery, impersonation videos and other deceptive or obscene content online.
When do the new rules come into effect?
The amended rules will take effect on February 20, 2026.
What is the new takedown timeline for unlawful content?
Platforms must remove illegal content flagged by the government or courts within three hours, reduced from the earlier 36-hour deadline.
Is there a separate timeline for intimate or sexual content?
Yes. Platforms must act within two hours in cases involving exposure of private areas, full or partial nudity, or sexual acts.
How do the rules define AI-generated or synthetic content?
The rules define such content as audio, visual or audio-visual material artificially or algorithmically created, generated, modified or altered using computer resources, in a manner that appears real and is likely to be perceived as indistinguishable from a natural person or real-world event. Routine editing, accessibility improvements and good-faith educational or design work are excluded.
What are the labelling requirements for AI-generated content?
Platforms that enable the creation or sharing of synthetic content must ensure it is clearly and prominently labelled and embedded with permanent metadata or identifiers, where technically feasible.
Can AI labels or metadata be removed?
No. Intermediaries must not allow the removal or suppression of AI labels or metadata once applied.
What obligations apply to significant social media intermediaries?
They must require users to declare whether content is AI-generated and verify such declarations before the content is published.
What role do automated tools play under the new rules?
Platforms must deploy automated tools to prevent the promotion of illegal, deceptive, sexually exploitative, non-consensual or otherwise unlawful AI content, including material related to child abuse, false documents, explosives or impersonation.
Are AI tool providers also covered?
Yes. The amendments impose obligations on both social media platforms and providers of AI tools such as ChatGPT, Grok and Gemini.
What happens if platforms fail to comply?
Failure to adhere to due diligence requirements, including timely takedowns and AI labelling, could result in the loss of safe harbour immunity.
Have grievance redressal timelines changed?
Yes. User grievance redressal timelines have been shortened under the amended rules.
Are there new user disclosure requirements?
Intermediaries must warn users at least once every three months about penalties for violating platform rules and laws, including misuse of AI-generated content.
Do serious AI-related violations have to be reported?
Yes. Violations involving serious crimes must be reported to authorities, including under child protection and criminal laws.
Was there any change from the earlier draft of the rules?
A draft provision requiring visible markers covering at least 10 per cent of a visual display or the initial duration of an audio clip has been dropped in the final version.
What triggered recent regulatory scrutiny?
The issue gained prominence after the controversy surrounding Grok, the AI chatbot on Elon Musk’s X, which allegedly allowed users to generate obscene and non-consensual content, including ‘digitally undressing’ images of women and minors. The IT Ministry directed X on January 2 to remove unlawful content generated by Grok.
What is the broader objective of the amendments?
The government says the changes aim to curb misuse of AI, prevent deepfake harms, combat misinformation, and strengthen accountability of digital platforms.

