India Tightens Rules on AI-Generated and Deepfake Content on Social Media
The Indian government has issued new directives requiring social media platforms, including Facebook, Instagram, and YouTube, to clearly mark all AI-generated content and embed identifiers that trace its origin. In a stricter move, platforms must remove flagged AI or deepfake material within three hours of receiving a government notice or court order.
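To make the labeling requirement concrete, here is a minimal sketch, under assumed details, of how a platform might attach a traceable origin identifier to AI-generated media: a signed manifest keyed to the file's content hash. The function name, field names, and the platform-held HMAC key are illustrative assumptions, not part of any prescribed standard.

    # Illustrative sketch (Python): attach an AI label and origin identifier
    # to a piece of media via a signed manifest tied to its content hash.
    import hashlib, hmac, json, uuid
    from datetime import datetime, timezone

    PLATFORM_SIGNING_KEY = b"replace-with-platform-secret"  # assumption: key held by the platform

    def build_ai_content_manifest(media_bytes: bytes, generator: str) -> dict:
        content_hash = hashlib.sha256(media_bytes).hexdigest()
        manifest = {
            "label": "ai-generated",              # user-visible AI label
            "content_sha256": content_hash,       # ties the label to this exact file
            "origin_id": str(uuid.uuid4()),       # identifier tracing the upload
            "generator": generator,               # declared AI tool or model
            "issued_at": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        # Sign the manifest so later tampering with the label or metadata is detectable
        manifest["signature"] = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest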
Companies are prohibited from removing or tampering with AI labels or metadata once applied. To prevent misuse, platforms must deploy automated detection systems to curb the spread of illegal, deceptive, or sexually exploitative AI-generated content. Platforms must also regularly remind users of the consequences of violating AI content rules, issuing warnings at least once every three months.
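A matching check, under the same illustrative assumptions as the sketch above, shows how an automated pipeline might confirm that a label and its metadata are still intact rather than stripped or altered.

    # Illustrative companion check (Python): flag content whose AI label or
    # metadata is missing, altered, or no longer matches the file it was issued for.
    import hashlib, hmac, json

    PLATFORM_SIGNING_KEY = b"replace-with-platform-secret"  # assumption: same platform-held key as above

    def verify_ai_content_manifest(media_bytes: bytes, manifest: dict) -> bool:
        if not manifest or "signature" not in manifest:
            return False  # label or metadata stripped: flag for review
        claimed = dict(manifest)
        signature = claimed.pop("signature")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return False  # metadata altered after the label was applied
        # The content hash must still match the file the label was issued for
        return claimed.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()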
These measures follow growing concerns about AI-driven deepfakes and build on proposed amendments by the Ministry of Electronics and Information Technology (MeitY) to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft amendments emphasize user disclosure when posting AI-modified content and require platforms to implement verification tools.
Enforcement initially targets social media intermediaries with more than five million registered users in India, reinforcing efforts to make online spaces safer and more transparent in the age of AI.

