AI three-hour takedown rule: When speed becomes the censor

Legal Lens | Over time, the call for speedy action could produce systematic over-removal of digital content, and lawful speech may be caught in the dragnet


The Union government yesterday (February 10) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules. One of the changes that may appear minor but could have far-reaching consequences is the compliance window for certain takedown directions — it has been reduced from 36 hours to three hours.

Compressing 36 hours to three hours amounts to a 12-fold reduction in response time. It fundamentally alters the conditions under which platforms make decisions about speech.

When the deadline applies

The three-hour deadline is triggered when intermediaries receive “actual knowledge” of unlawful content through legally recognised routes — typically a court order or a government-authorised notice under the Rules. It does not automatically apply to routine user complaints.

That distinction matters. Even within this framework, however, the range of content that may be targeted remains wide. Government directions and court orders can concern clearly illegal material, but they can also involve politically sensitive, contested or context-dependent speech.

Three hours may be sufficient for manifestly unlawful content. But it is rarely enough for nuance.

Consider a plausible scenario. A local journalist uploads a video alleging irregularities during a civic poll. A takedown direction arrives at 9 pm. By midnight, the platform must comply. There is little time to verify context, assess whether limited restriction would suffice, or even contact the uploader. The video is disabled first. Any review or clarification comes later — if at all.

In today’s digital news cycle, those lost hours can mean the story never regains traction.

Beyond deepfakes

The amendment arrives amid growing anxiety about AI-generated and synthetic media. Deepfakes can cause real harm, particularly in electoral contexts. Responding to such threats is a legitimate regulatory objective.

But the three-hour rule is not confined to deepfakes or other narrowly defined urgent harms such as non-consensual intimate imagery or child sexual abuse material. It operates as a general compression of the takedown timeline. Once a qualifying direction is issued, the same deadline applies across categories.

A rule justified by fast-moving harms thus becomes a structural acceleration mechanism for all state-backed takedown directions.

The question is whether every category of allegedly unlawful speech demands emergency treatment.

Compliance over context

India’s intermediary liability regime is built around a conditional safe harbour. Platforms are protected from liability for user content so long as they comply with due diligence requirements, including acting on valid takedown directions.

Time limits are central to this arrangement. The shorter the deadline, the stronger the incentive to prioritise legal safety over substantive evaluation.

With only three hours to comply, the safest institutional response for a platform, especially one with significant legal exposure, is straightforward: disable access immediately and examine the details later. The logic is one of risk management; faced with a hard deadline and potential liability, precautionary removal becomes the rational choice.

Over time, that dynamic can produce systematic over-removal. Lawful speech may be caught in the dragnet, because the regulatory design encourages erring on the side of deletion.

For citizens, journalists and political actors, even temporary disappearance can have lasting effects. Online discourse moves quickly. A story taken down during its peak moment may never recover its impact, even if restored days later.

Speed takes priority

The government has previously emphasised procedural safeguards within the takedown process — including clearer authorisation and review mechanisms. Such measures are important.

But safeguards depend on time. If compliance must occur within three hours, internal legal escalation and contextual review become difficult in complex cases. The removal happens first. Any contestation or reconsideration follows.

Under Article 19(1)(a) of the Constitution, restrictions on speech must be reasonable and proportionate. Proportionality requires a balance between the objective pursued and the means used to achieve it.

A regime that predictably incentivises precautionary removal, even in contested cases, may invite scrutiny of whether that balance has shifted too far toward restriction. Speed, in other words, shapes outcomes.

Structural impact

There is also a less discussed dimension: capacity.

Large platforms can maintain round-the-clock compliance teams and in-house legal departments in India. Smaller intermediaries and start-ups often cannot. For them, a three-hour clock may be far more burdensome.

The result could be indirect consolidation. Firms unable to sustain 24x7 legal triage may adopt aggressively restrictive moderation policies or avoid hosting certain kinds of user-generated content altogether. Some may exit the market.

Regulation designed to discipline dominant platforms can, in practice, entrench them.

The case for calibration

None of this is to deny that some forms of content require urgent intervention. Child sexual abuse material, incitement to imminent violence, non-consensual intimate imagery, or election-related deepfakes can justify compressed timelines. Delay in such cases can magnify harm.

But applying the same three-hour deadline across all categories of state-backed takedown directions is blunt. A more calibrated framework could distinguish between manifestly unlawful content and contested speech, combine urgency with mandatory specificity in notices, and ensure rapid user notification and transparency.

Without such differentiation, speed itself becomes the primary regulatory tool. Replacing “36 hours” with “three hours” may appear to be a small textual amendment. In practice, it restructures the incentives that govern online speech in India.

Deepfakes and synthetic manipulation are real threats. Addressing them requires responsiveness. But when the law accelerates suppression without equally accelerating safeguards, it risks creating a system that is efficient for compliance — and less forgiving of lawful expression.

That trade-off deserves careful public debate.
