Context:
Artificial Intelligence (AI) has changed the way people create and share information online. It can write, paint, speak, and even mimic humans. But this same power has also created a dangerous problem — deepfakes. These are videos, images, or audio clips that look completely real but are actually fake, made using AI. They can easily mislead people and spread false information.
The New Proposal:
The proposed rules are India’s first strong step against deepfake misuse. The main points are:
- User Declaration: Every user must declare whether the content they upload is “synthetically generated information.”
- Mandatory Labelling: Platforms must ensure such content is clearly marked with a label or permanent digital tag.
- Label Requirements (see the sketch after this list):
  - For videos or images, the label must cover at least 10% of the surface area.
  - For audio, the label must be audible during the first 10% of the duration.
- Verification by Platforms: Social media platforms must use technical tools to check if users’ declarations are correct.
- Accountability: If platforms fail to verify or label such content, they may lose legal protection and be held responsible for the AI-generated content uploaded by users.
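To make the two 10% thresholds concrete, here is a minimal Python sketch that computes the required label size for an image or video frame and the required audible span for an audio clip. The function names and sample values are illustrative assumptions, not part of the draft rules.

```python
# Minimal sketch of the draft rules' 10% labelling thresholds.
# Function names and sample values are illustrative assumptions,
# not something prescribed by the draft rules themselves.

def min_label_area_px(width_px: int, height_px: int) -> float:
    """Minimum label area for an image or video frame:
    at least 10% of the total surface area."""
    return 0.10 * width_px * height_px

def min_audio_label_seconds(duration_s: float) -> float:
    """Minimum audible-label span, starting at t = 0:
    the first 10% of the clip's duration."""
    return 0.10 * duration_s

if __name__ == "__main__":
    # A 1920x1080 frame needs a label covering at least 207,360 px².
    print(min_label_area_px(1920, 1080))    # 207360.0
    # A 60-second clip needs an audible label for its first 6 seconds.
    print(min_audio_label_seconds(60.0))    # 6.0
```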
What Are Deepfakes?
Deepfakes are pieces of media — videos, photos, or audio — that seem real but are actually made or changed using deep learning, a branch of artificial intelligence.
- How They Work: AI models learn from thousands of real images or recordings and then use this data to replace faces, copy voices, or change actions, creating very realistic results.
- Where They Are Used:
  - Entertainment: For creating movie scenes or visual effects.
  - E-commerce: For virtual clothing trials or digital ads.
  - Communication: For translating speech or generating voiceovers.
Why Did India Need Such a Regulation?
- The danger of deepfakes became clear in 2023, when a fake video of actor Rashmika Mandanna went viral. It appeared to show her entering an elevator, but her face had been swapped onto another woman's video using AI. The clip spread widely before being debunked, sparking public outrage.
- Prime Minister Narendra Modi later warned that deepfakes pose a new “crisis” for society. Following this, many actors such as Amitabh Bachchan, Aishwarya Rai Bachchan, Akshay Kumar, and Hrithik Roshan have filed legal cases to protect their personality rights — that is, their name, image, and voice.
- However, India does not have specific laws that recognise personality rights. Protection comes indirectly through a mix of laws like the Information Technology Act and the Copyright Act, which do not fully address AI-created impersonations. This gap makes legal protection weak and inconsistent.
What Are Other Countries Doing?
- European Union (EU): The AI Act makes it compulsory to label any content (image, video, audio, or text) created or changed using AI. The labels must be machine-readable so that anyone can detect whether the content is artificial (a minimal sketch of such a label follows this list).
- China: Introduced strict AI labelling rules that require all AI-generated material to display visible symbols or watermarks. Platforms must monitor AI content and alert users when deepfakes are detected.
- Denmark: Is planning a new law to give citizens copyright over their own likeness, meaning they can demand removal of any deepfake created without their consent.
- United States: As of 2024, 23 U.S. states have passed laws against deepfakes, especially targeting fake political content, misinformation, and non-consensual sexual videos.
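To show what "machine-readable" could mean in practice, here is a minimal Python sketch that builds a small provenance record and serialises it as JSON. The field names are assumptions made for this example; real deployments rely on shared standards such as C2PA Content Credentials rather than an ad-hoc schema like this.

```python
import json
from datetime import datetime, timezone

# Illustrative machine-readable AI-content label. The schema is an
# assumption for this sketch; production systems use shared standards
# such as C2PA Content Credentials instead of ad-hoc JSON.

def make_ai_label(generator: str, action: str) -> str:
    """Return a JSON record declaring content as synthetically generated."""
    record = {
        "synthetic": True,                  # content was AI-generated/altered
        "generator": generator,             # tool that produced the content
        "action": action,                   # e.g. "generated" or "edited"
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

label = make_ai_label(generator="example-image-model", action="generated")
print(label)  # software can parse this; a UI can render it as a visible tag
```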
The Current Legal and Institutional Framework in India:
Although India does not yet have a specific deepfake law, some existing laws and institutions help tackle such problems:
1. Information Technology Act, 2000
This Act governs digital communication in India. It applies to content made using AI and provides a legal framework for prosecuting cybercrimes, though it doesn’t mention deepfakes specifically.
2. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
These rules regulate social media platforms and online publishers. They require companies to set up grievance mechanisms and remove harmful or misleading content when reported.
3. CERT-In (Indian Computer Emergency Response Team)
CERT-In monitors cyber threats, issues advisories, and runs the Cyber Swachhta Kendra — a centre for cleaning and analysing malware and botnets.
4. Indian Cyber Crime Coordination Centre (I4C)
I4C coordinates with law enforcement agencies to handle cybercrimes and operates the National Cyber Crime Reporting Portal, along with the helpline number 1930.
These institutions form India’s basic defence against AI-related cyber risks, but their approach is still fragmented and largely reactive — responding after harm has occurred.
Concerns Associated with Deepfakes:
- Threat to National Security: Fake videos can provoke violence, spread misinformation, or harm diplomatic relations. Manipulated political videos or speeches can mislead voters and damage democratic credibility.
- Cyberbullying and Reputation Damage: False images or clips can ruin a person’s public image and harm their mental health. Studies show that 90–95% of deepfakes online involve non-consensual sexual content, often targeting women.
- Identity Theft: AI tools can be used to create fake identification or impersonate real people online.
- Public Unawareness: Even when deepfakes are exposed, many people continue to believe them, increasing the spread of misinformation.
- High Cost of Detection: Detecting deepfakes requires huge data, advanced computing systems, and expert algorithms — which are costly and complex.
Industry Efforts to Label AI Content:
Some major technology companies have already started taking steps to identify and label AI-generated content:
- Meta (Instagram, Facebook): Uses the “AI Info” label for content created or modified with AI tools.
- YouTube: Uses a label called “Altered or Synthetic Content” and provides information on how the video was made.
- Collaboration Efforts: Companies like Meta, Google, Microsoft, Adobe, and OpenAI are working together through the Partnership on AI (PAI) to develop shared standards and invisible watermarks for identifying AI-generated material.
However, these measures are often reactive. Labels are usually added only after users or authorities flag suspicious content. India’s draft rules aim to make the process proactive, requiring verification and labelling before such content is published.
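As a rough illustration of how an invisible watermark can travel with an image, the toy Python sketch below hides a short tag in the least significant bits of the red channel using Pillow and NumPy. Real provenance watermarks are far more robust (designed to survive compression, resizing, and cropping); this LSB scheme is a teaching example only.

```python
import numpy as np
from PIL import Image

# Toy invisible watermark: hide a short ASCII tag in the least
# significant bits of an image's red channel. A teaching sketch only;
# production watermarks are designed to survive re-encoding and edits.

TAG = "AI-GEN"  # hypothetical marker string

def embed(img: Image.Image, tag: str = TAG) -> Image.Image:
    pixels = np.array(img.convert("RGB"))
    bits = np.array([int(b) for byte in tag.encode() for b in f"{byte:08b}"],
                    dtype=np.uint8)
    red = pixels[..., 0].flatten()
    red[:bits.size] = (red[:bits.size] & 0xFE) | bits   # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def extract(img: Image.Image, length: int = len(TAG)) -> str:
    red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = red[:length * 8] & 1
    data = bytes(int("".join(str(b) for b in bits[i:i + 8]), 2)
                 for i in range(0, bits.size, 8))
    return data.decode(errors="replace")

marked = embed(Image.new("RGB", (64, 64), "white"))
print(extract(marked))  # "AI-GEN" (the tag is lost if the image is re-encoded)
```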
The Way Forward:
1. Strengthen the Legal Framework
India should move from reactive to preventive action. A new law should clearly define offences, responsibilities, and penalties related to deepfakes and AI misuse.
2. Build Institutional Capacity
Dedicated agencies should be equipped with skilled professionals and technology to detect and remove deepfakes in real time.
3. Use Advanced Technology
Adopt AI tools and algorithms that can detect deepfakes based on context and metadata. For instance, MIT’s Detect Fakes project helps users learn to identify fake videos by noticing small visual details.
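A full deepfake detector cannot fit in a few lines, but one of the signals mentioned above, metadata, is easy to screen. The Python sketch below scans an image's embedded metadata for strings that generation tools sometimes leave behind. The keyword list is an assumption for illustration, and a clean result proves nothing: metadata is trivial to strip, so this is only one weak signal among many.

```python
from PIL import Image

# Heuristic metadata screen: look for generator signatures in an image's
# embedded metadata. The keyword list is an illustrative assumption; a
# clean result does NOT mean the image is authentic (metadata is easy
# to strip), so treat this as one weak signal among many.

AI_MARKERS = ("stable diffusion", "midjourney", "dall-e", "generated", "c2pa")

def suspicious_metadata(path: str) -> list[str]:
    img = Image.open(path)
    # Pillow exposes format-specific text chunks via Image.info and
    # EXIF tags via Image.getexif().
    fields = list(img.info.items()) + list(img.getexif().items())
    hits = []
    for key, value in fields:
        text = f"{key}={value}".lower()
        if any(marker in text for marker in AI_MARKERS):
            hits.append(text[:80])  # keep a short excerpt as evidence
    return hits

# Usage with a hypothetical file path:
# print(suspicious_metadata("upload.png") or "no known AI markers found")
```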
4. Promote Cyber Literacy
People should be trained to think critically before trusting what they see online. Awareness campaigns in schools and colleges can help build this digital literacy.
5. Strengthen Collaboration
Government, technology companies, law enforcement, and civil society must work together to set strong procedural guidelines and penalties for misuse.
Conclusion:
Deepfakes represent one of the most serious digital threats of our time. They blur the line between truth and falsehood, damaging trust, privacy, and security. India’s proposal to make labelling of AI-generated content mandatory is a welcome step toward building transparency and accountability online.
| UPSC/PCS Main Question: India’s existing legal framework on cyber governance is largely reactive rather than preventive. In light of the recent draft amendments to regulate AI-generated deepfakes, examine the need for a proactive and comprehensive legal architecture to tackle emerging digital threats. |
