Govt proposes new IT rules for AI-generated content labelling
Draft amendments mandate clear labels, visibility markers for deepfakes; increase accountability for platforms like Facebook and YouTube to curb misuse and misinformation
The government on Wednesday (October 22) proposed amendments to the IT Rules, requiring the clear labelling of AI-generated content and enhancing the accountability of major social media platforms such as Facebook and YouTube in identifying and flagging synthetic information, in a bid to curb user harm arising from deepfakes and misinformation.
Rapid spread of deepfake audio, videos
The IT Ministry observed that the rapid spread of deepfake audio, videos and synthetic media on social platforms has revealed the capacity of generative AI to produce “convincing falsehoods”. Such content, it warned, could be “weaponised” to disseminate misinformation, tarnish reputations, manipulate or influence elections, or perpetrate financial fraud.
The proposed amendments to the IT Rules aim to establish a clear legal foundation for labelling, traceability, and accountability relating to synthetically generated information.
In addition to clearly defining what constitutes synthetically generated information, the draft amendment, open for stakeholder comments until November 6, 2025, requires labelling, visibility markers, and embedded metadata to distinguish synthetic or modified content from authentic media.
Social media accountability
The stricter rules would increase the responsibility of significant social media intermediaries (those with 50 lakh or more registered users) to verify and flag synthetic content through reasonable and appropriate technical measures.
Under the draft rules, platforms must label AI-generated content with prominent visual or audio markers, covering at least 10 per cent of the visual display or the first 10 per cent of an audio clip’s duration.
Furthermore, major social media platforms must obtain a user declaration specifying whether uploaded material is synthetically generated, implement reasonable and proportionate technical measures to verify such claims, and ensure that AI-generated content is clearly labelled or accompanied by an appropriate notice indicating the same.
Defining and distinguishing synthetic content
The draft rules further prohibit intermediaries from modifying, suppressing, or removing such labels or identifiers.
"In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society...people using some prominent person's image, which then affects their personal lives, and privacy...Steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing," IT Minister Ashwini Vaishnaw said, adding that mandatory labelling and visibility will enable clear distinctions between synthetic and authentic content.
Once the rules are finalised, any compliance failure could cost large platforms the safe harbour protection they currently enjoy.
Growing threat of deepfakes
With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly, the IT Ministry said.
Accordingly, the IT Ministry has prepared draft amendments to the IT Rules, 2021, with an aim to strengthen due diligence obligations for intermediaries, particularly significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.
A note by the IT Ministry said that policymakers, globally and in India, are increasingly concerned about fabricated or synthetic images, videos, and audio clips (deepfakes) that are indistinguishable from real content and are being used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, and commit fraud or impersonation for financial gain.
Increasing usage of AI tools
The latest move assumes significance as India is among the top markets for global social media platforms such as Facebook and WhatsApp.
A senior Meta official said last year that India has become the largest market for Meta AI usage. In August this year, OpenAI CEO Sam Altman said that India, currently the company's second-largest market, could soon become its largest globally.
Asked whether the revised rules would also apply to content generated on tools such as OpenAI's Sora or Google's Gemini, sources said that in many cases videos are generated but not circulated; the obligation is triggered when a video is posted for dissemination. The onus in such cases would be on intermediaries displaying the media to the public and on users hosting the media on the platforms.
Legal cases against AI misuse
On the treatment of AI content on messaging platforms like WhatsApp, sources told PTI that once such content is brought to a platform's notice, it will have to take steps to prevent its virality.
India has witnessed an alarming rise in AI-generated deepfakes, prompting court interventions. Recent viral cases include misleading ads depicting a fake arrest of Sadhguru, which the Delhi High Court ordered US digital giant Google to remove.
Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan filed a lawsuit against YouTube and Google seeking Rs 4 crore in damages over alleged AI deepfake videos.
(With agency inputs)



