MANDATORY AI LABELLING
- The Indian government has proposed draft rules that would mandate the labelling of AI-generated content on social media platforms such as YouTube and Instagram, in order to curb the spread of deepfakes and synthetic media.
- Under the amendments to the IT Rules, 2021, platforms must:
- Ask users to declare whether uploaded content is AI-generated.
- Ensure such content carries permanent labels or metadata identifiers.
- For videos, the label must cover 10% of the screen area; for audio, it must play during the first 10% of the duration (a simple threshold check is sketched after this list).
- The move follows rising concerns over digitally altered videos, such as the 2023 Rashmika Mandanna deepfake, which went viral and prompted Prime Minister Narendra Modi to warn that deepfakes pose a new national crisis.
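As a rough illustration of the prominence thresholds mentioned in the list above, the Python sketch below checks whether a visible label covers at least 10% of a video frame and whether an audio disclosure begins within the first 10% of a clip's duration. The function names and inputs are hypothetical, and treating the audio rule as a start-time check is an assumption; the draft rules specify the percentages but not any implementation.

```python
# Hypothetical helper checks for the draft rules' prominence thresholds:
# a visible label should cover >= 10% of the video frame, and an audio
# disclosure should fall within the first 10% of the clip's duration.

VIDEO_LABEL_MIN_FRACTION = 0.10
AUDIO_DISCLOSURE_MAX_START_FRACTION = 0.10

def video_label_is_prominent(frame_w: int, frame_h: int,
                             label_w: int, label_h: int) -> bool:
    """True if the label's area is at least 10% of the frame's area."""
    return (label_w * label_h) >= VIDEO_LABEL_MIN_FRACTION * (frame_w * frame_h)

def audio_disclosure_is_prominent(clip_seconds: float, disclosure_start: float) -> bool:
    """True if the spoken disclosure starts within the first 10% of the clip."""
    return disclosure_start <= AUDIO_DISCLOSURE_MAX_START_FRACTION * clip_seconds

# A 1920x1080 frame has 2,073,600 px, so the label needs at least 207,360 px.
print(video_label_is_prominent(1920, 1080, 640, 360))   # True: 640*360 = 230,400 px
print(audio_disclosure_is_prominent(120.0, 5.0))        # True: 5 s is within the first 12 s
```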
DRAFT RULES FOR LABELLING AI CONTENT
- The proposed IT Rules amendments require social media platforms to:
- Ask users to declare if uploaded material is AI-generated or synthetic.
- Use automated tools to verify such declarations.
- Clearly label confirmed AI-generated content with a visible notice (a rough workflow sketch follows this list).
- If platforms fail to comply, they risk losing legal immunity under safe harbour provisions, making them liable for unverified or unlabelled synthetic content.
- The draft also defines “synthetically generated information” as content created, altered, or modified using computer algorithms in a way that appears authentic or real.
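The declare-verify-label workflow described in the list above could be sketched roughly as follows. The Upload class, the automated_check stand-in, and the label fields are all assumptions made for illustration; the draft amendments require declarations, automated verification, and clear labels, but do not prescribe any particular detection tool or metadata schema.

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # declaration collected at upload time

def automated_check(upload: Upload) -> bool:
    """Stand-in for an automated verification tool; a real platform would run
    detection models or inspect provenance metadata here."""
    return upload.user_declared_synthetic

def build_label(upload: Upload) -> Optional[dict]:
    """Return a machine-readable label record when content is declared or
    detected as synthetically generated; return None otherwise."""
    if upload.user_declared_synthetic or automated_check(upload):
        return {
            "content_id": upload.content_id,
            "synthetically_generated": True,
            "notice": "AI-generated / synthetic content",   # text for the visible label
        }
    return None

print(json.dumps(build_label(Upload("vid-001", user_declared_synthetic=True)), indent=2))
```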
EXISTING AI LABELLING PRACTICES
- Platforms like Meta and Google already use AI content labels, asking creators to declare if content was made or modified using AI.
- Instagram tags such posts with an “AI Info” label, though enforcement is inconsistent.
- YouTube adds an “Altered or Synthetic Content” tag, explaining how AI influenced the video’s creation.
- Meta has also partnered with other tech firms under the Partnership on AI (PAI) to develop common identification standards, and is building tools to detect invisible AI markers in content generated by tools from companies such as Google, OpenAI, Adobe, and Midjourney.
- However, these global measures are mostly reactive, activated after content is flagged.
- In contrast, India’s proposed rules require companies to proactively verify and label AI-generated content using automated detection tools, even when the uploader has not declared it as such.
THE BOLLYWOOD CONNECTION
- The rise of AI-generated deepfakes has alarmed India’s entertainment industry, prompting stars like Amitabh Bachchan, Aishwarya Rai, Akshay Kumar, and Hrithik Roshan to take legal action to protect their personality rights — their likeness, voice, and image.
- Experts note that India lacks explicit legal protection for personality rights, relying instead on a patchwork of existing laws for limited safeguards.
- The issue gained attention after the production company behind Raanjhanaa reportedly used AI to alter the film’s ending without the consent of its director and actors.
- This highlighted the urgent need for clearer legal frameworks against deepfakes in Indian entertainment.
WHAT ARE PERSONALITY RIGHTS?
- Personality rights refer to the right of a person to protect his/her personality under the right to privacy or property.
- These rights are particularly important to celebrities, whose names, photographs, or even voices can easily be misused by companies in advertisements to boost sales.
- Therefore, it is necessary for renowned personalities/celebrities to register their names to protect their personality rights.
- A wide range of unique personal attributes contributes to the making of a celebrity.
- All of these attributes need protection, including name, nickname, stage name, picture, likeness, image, and even identifiable personal property such as a distinctive race car.
AI LABELLING IN OTHER COUNTRIES
- Countries worldwide are introducing AI labelling and protection laws to curb the spread of deepfakes and synthetic media.
- European Union: The AI Act mandates that all AI-generated or altered content — including text, audio, video, and images — must be clearly labelled in a machine-readable format. Entities using AI for public-interest content must also disclose artificial generation or alteration.
- China: Recently enforced AI labelling rules require visible markers for chatbots, voice synthesis, face swaps, and scene editing. Hidden watermarks can be used for other AI content (a toy example of the idea follows this list). Platforms must detect, label, and alert users about AI-generated material.
- Denmark: Proposes giving citizens copyright ownership over their likeness, allowing them to demand removal of any digitally altered content created without consent — a pioneering move in personal image rights protection.
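To make the idea of a hidden watermark (mentioned under China above, and akin to the invisible markers PAI members are working on) concrete, here is a deliberately simplified toy in Python using the Pillow imaging library: it hides a short marker in the least-significant bits of an image's red channel. Real provenance watermarks use far more robust, tamper-resistant schemes; this sketch only illustrates the concept of a marker that is machine-readable but visually imperceptible.

```python
# Toy "invisible" marker: hide a short string in the least-significant bits of
# the red channel. Illustrative only; production watermarking is far more robust.
from PIL import Image  # Pillow

def embed_marker(img: Image.Image, marker: str = "AI") -> Image.Image:
    out = img.convert("RGB").copy()
    pixels = out.load()
    width, height = out.size
    bits = "".join(f"{ord(ch):08b}" for ch in marker)
    assert len(bits) <= width * height, "image too small for the marker"
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)   # overwrite the lowest red bit
    return out

def read_marker(img: Image.Image, length: int = 2) -> str:
    rgb = img.convert("RGB")
    pixels = rgb.load()
    width, _ = rgb.size
    bits = "".join(str(pixels[i % width, i // width][0] & 1) for i in range(length * 8))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stamped = embed_marker(Image.new("RGB", (64, 64), "white"))
print(read_marker(stamped))   # "AI" -- invisible to the eye, readable by software
```

Note that such a naive bit-level marker only survives lossless formats like PNG; lossy compression would destroy it, which is one reason real detection systems rely on sturdier watermarking and provenance metadata.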