India has officially rolled out new IT rules for AI-generated content, bringing deepfakes, synthetic media, and altered visuals under a clear legal framework for the first time. These changes aim to improve online safety, transparency, and accountability across digital platforms.
The updated rules took effect today, February 20, 2026, following a government notification issued earlier this month.
What Are the New IT Rules for AI-Generated Content?
The amendments update the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. They were notified on February 10, 2026, through Gazette Notification G.S.R. 120(E) by the Ministry of Electronics and Information Technology (MeitY).
For the first time, AI-generated and synthetically altered content is now formally regulated in India.
How the Government Defines “Synthetically Generated Content”
Under the new framework, synthetically generated information includes:
- AI-created or AI-modified audio, video, or images
- Deepfake videos
- AI-generated voices
- Face-swapped or digitally altered visuals
- Content that appears real and can mislead users about people or events
This definition also applies to fictional scenarios involving real individuals if they look authentic.
Content That Is Exempt
Not all digital edits fall under these rules. The following are explicitly excluded:
- Colour correction and noise reduction
- File compression
- Language translation
- Accessibility enhancements
- Conceptual or illustrative images used in documents, research papers, PDFs, or presentations
- Drafts, templates, or hypothetical content
For example:
- A stock AI illustration in an office presentation is allowed
- A fake video of a politician giving a speech they never made is not allowed
Mandatory Labelling of AI-Generated Content
What Users Will Notice First
If you use platforms like Instagram, YouTube, or Facebook, you will now see clear labels on AI-generated content.
Any AI-created post, reel, video, or audio must display a visible marker before users can like, share, or forward it.
New Rules for Uploading Content
When uploading content, users may be asked to declare whether it was created or modified using AI.
Providing false information is no longer just a platform policy violation. Depending on the content, it may trigger legal action under:
- Bharatiya Nyaya Sanhita (BNS), 2023
- POCSO Act, in cases involving minors
Platforms must also remind users of this obligation every three months.
Permanent Markers and Traceability Requirements
All services that host or distribute AI-generated content must:
- Embed permanent metadata and unique identifiers
- Ensure these markers cannot be altered, hidden, or removed
- Display the AI label directly on the content, not buried in settings or metadata
This closes earlier loopholes where labels could disappear after re-uploads.
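The rules describe what markers must achieve, not how platforms should build them. As a rough illustration of the "cannot be altered or removed" requirement, the sketch below binds an AI-disclosure label to a file's hash and signs it with an HMAC, so that stripping or editing the label (or swapping the content) becomes detectable. The key name, tool name, and label fields here are invented for the example; real systems would use standardized provenance formats and proper key management.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; real deployments manage keys securely.
PLATFORM_KEY = b"example-signing-key"

def attach_ai_label(content_bytes: bytes, tool_name: str) -> dict:
    """Build a tamper-evident AI-content label for a media file.

    The label binds a unique identifier (the content hash) to the
    disclosure, and an HMAC signature makes silent alteration detectable.
    """
    label = {
        "ai_generated": True,
        "tool": tool_name,
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_ai_label(content_bytes: bytes, label: dict) -> bool:
    """Return True only if the label is intact and matches the content."""
    claimed = {k: v for k, v in label.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(content_bytes).hexdigest():
        return False  # content was swapped or edited after labelling
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label.get("signature", ""))

media = b"...synthetic video bytes..."
label = attach_ai_label(media, "example-gen-model")
print(verify_ai_label(media, label))            # intact label -> True
print(verify_ai_label(b"edited bytes", label))  # tampered content -> False
```

Note the design choice: the signature covers both the disclosure and the content hash, so a re-upload that keeps the label but changes the media fails verification, which is exactly the loophole the rules target.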
Extra Compliance Burden for Large Platforms
Major digital platforms have additional responsibilities:
- Mandatory AI-disclosure prompts before content is published
- Automated tools to verify user claims about AI usage
- Immediate action if unmarked AI content is detected
If a platform is found to be knowingly hosting unlabelled AI-generated content, it can lose its legal safe harbour protection.
Changes to Visual and Audio Marker Rules
Earlier drafts required:
- Visual markers covering at least 10% of the screen
- Audio markers during the first 10% of playback
After industry feedback, the size requirement was removed.
However, clear and visible marking is still mandatory.
Faster Government Takedown Timelines
The new rules significantly tighten response deadlines:
- Some government orders must now be followed within 3 hours (earlier 36 hours)
- Other timelines reduced from 15 days to 7 days
- Certain actions shortened from 24 hours to 12 hours
Mandatory Blocking of Illegal AI Content
Platforms must actively use automated detection tools to prevent and remove AI-generated content involving:
- Child sexual abuse material
- Obscene or sexually explicit content
- Fake electronic records
- Weapons or explosives
- Deepfakes designed to misrepresent real people or events
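The rules do not prescribe a detection technique, but the most common building block for screening uploads against known illegal material is hash matching. The sketch below uses a plain SHA-256 blocklist purely as an assumption for illustration; production systems rely on perceptual hashes (PhotoDNA-style) that survive re-encoding, and on shared industry hash databases rather than a local set.

```python
import hashlib

# Hypothetical blocklist of hashes of known prohibited media,
# standing in for shared industry hash databases.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-illegal-sample").hexdigest(),
}

def should_block(upload_bytes: bytes) -> bool:
    """Exact-match screening of an upload against the blocklist.

    A cryptographic hash only catches byte-identical copies; real
    deployments use perceptual hashing to catch re-encoded variants.
    """
    return hashlib.sha256(upload_bytes).hexdigest() in BLOCKED_HASHES

print(should_block(b"known-illegal-sample"))  # exact copy -> True
print(should_block(b"ordinary upload"))       # unknown content -> False
```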
Updated Legal References
All references to the Indian Penal Code (IPC) have been replaced with the Bharatiya Nyaya Sanhita, 2023, aligning digital rules with India’s updated criminal laws.
Compliance Timeline
- Draft rules released: October 2025
- Final notification issued: February 10, 2026
- Rules effective from: February 20, 2026
What This Means for You
For everyday users, these rules mean:
- More transparency around AI-generated content
- Clear warnings before consuming or sharing synthetic media
- Greater accountability for creators and platforms
- Stronger protection against deepfakes and misinformation
India’s new IT rules mark a major step toward responsible AI use, digital trust, and safer social media in the age of artificial intelligence.
