
As India navigates the aftermath of the 2024 General Elections and heads deeper into 2025, the threat posed by deepfakes—AI-generated synthetic media—has evolved from a fringe tech novelty into a mainstream societal and cybersecurity concern. From political propaganda to non-consensual intimate content, deepfakes are challenging the very foundations of truth in India’s digital landscape.
What Are Deepfakes?
Deepfakes use AI models, particularly deep learning and generative adversarial networks (GANs), to create hyper-realistic audio, video, or images of people saying or doing things they never actually did. With the rise of free and accessible tools like ElevenLabs for voice cloning and open-source deepfake platforms, creating convincing manipulations now requires little technical expertise.
The Scope of the Crisis in India
📈 Rise in Deepfake-Linked Crimes
According to India’s Cyber Crime Coordination Centre (I4C), deepfake-related complaints surged by over 300% between early 2023 and late 2024. In April 2025, a Press Information Bureau (PIB) update noted that CERT-In had issued multiple advisories on deepfake threats, especially concerning the elections.
🔥 Notable 2024–2025 Cases
- Political Manipulations: During the 2024 General Elections, deepfake videos of major politicians, including fake apologies from national leaders, spread across WhatsApp and Instagram.
- Celebrity Exploits: Actress Rashmika Mandanna’s manipulated explicit video sparked nationwide outrage, spotlighting the non-consensual deepfake epidemic.
- AI Robocalls: Over 50 million voice-cloned robocalls were reported during the election season, according to cybersecurity firm Pindrop.
- Women Targeted: Platforms like Civitai have been accused of hosting thousands of non-consensual explicit deepfake models featuring Indian women, many of them non-celebrities.
Legal Framework in India: Where Do We Stand?
✅ Existing Rules
- IT Rules (2021): Platforms must remove “manipulated media” within 24 hours of receiving a complaint under Rule 3(1)(b).
- Grievance Appellate Committees (GACs): Established to address user grievances regarding content takedown.
⚠️ Legal Gaps
- No Dedicated Deepfake Law: India still lacks a targeted legal framework like the US “Take It Down Act” (2025), which federally criminalizes non-consensual deepfakes.
- Outdated IPC/IT Act Sections: Current provisions (e.g., Sections 66C, 66D, 67A) do not fully account for the speed, anonymity, and viral nature of deepfakes.
- Ongoing Consultations: MeitY submitted a status report to the Delhi High Court in March 2025, highlighting the need for AI-specific legislation and mandatory content disclosure norms.
Government Response: Strengths & Limitations
🛡️ Institutional Efforts
- CERT-In: Issues public advisories and monitors digital threats.
- I4C (MHA): Operates the cybercrime.gov.in portal and issues takedown notices to platforms.
- IndiaAI Mission: Aims to develop indigenous AI tools, including deepfake detection capabilities.
🚧 Challenges
- Arms Race: Deepfake creation is evolving faster than detection.
- Cross-Border Issues: Many perpetrators are outside India’s jurisdiction.
- Moderation Limits: Human review can’t keep up with the scale of uploads.
- Digital Literacy: A major divide persists in citizens’ ability to detect fakes.
Detecting and Combating Deepfakes: What Works?
🔍 AI-Powered Detection
Modern AI models detect inconsistencies in:
- Facial micro-expressions and eye movement
- Audio mismatches (lip sync vs. voice)
- Lighting, background artifacts, and pixel distortions
In 2025, Accops and Pi-Labs partnered to deploy such detection tools in government authentication systems.
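To make the "pixel distortions" signal concrete, here is a minimal, purely illustrative sketch of one classic forensic heuristic: comparing high-frequency noise inside a suspected face region against the rest of the frame. A synthesized or pasted region often has a different noise fingerprint than the camera-captured background. This is a toy heuristic on synthetic data, not a production deepfake detector (real systems rely on trained deep models); the function names and threshold are assumptions for illustration.

```python
# Illustrative sketch only: a crude noise-mismatch heuristic, NOT a
# production deepfake detector. Real detectors use trained deep models.
import numpy as np

def high_freq_residual(frame: np.ndarray) -> np.ndarray:
    """Approximate high-frequency noise by subtracting a 3x3 box blur."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    blur = sum(
        padded[i:i + frame.shape[0], j:j + frame.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return frame - blur

def splice_score(frame: np.ndarray, face_box: tuple) -> float:
    """Compare noise energy inside a (hypothetical) face box vs. the rest.

    A large mismatch can hint that the region was synthesized or pasted in.
    """
    y0, y1, x0, x1 = face_box
    residual = np.abs(high_freq_residual(frame))
    mask = np.zeros(frame.shape, dtype=bool)
    mask[y0:y1, x0:x1] = True
    face_noise = residual[mask].mean()
    bg_noise = residual[~mask].mean()
    return abs(face_noise - bg_noise) / (bg_noise + 1e-9)

# Toy example: a noisy "camera" background with an unnaturally smooth patch,
# standing in for a pasted synthetic face.
rng = np.random.default_rng(0)
frame = rng.normal(128, 10, (64, 64))
frame[16:48, 16:48] = 128.0  # smooth pasted region
print(splice_score(frame, (16, 48, 16, 48)) > 0.3)
```

In practice, detection pipelines combine many such signals (temporal flicker, lip-sync mismatch, blending boundaries) and feed them to trained classifiers, which is why the arms race described below is so hard to win with any single heuristic.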
🏛️ Platform Accountability
Platforms face increasing pressure to:
- Proactively moderate AI-generated content
- Label manipulated media
- Respond to takedown requests within legal timelines
💡 Solutions in Progress
- Watermarking: Still experimental, but content provenance standards are being explored globally.
- Digital Literacy: Awareness campaigns in multiple languages are being conducted by NGOs and cybersecurity volunteers.
- Fact-Checkers: Independent media watchdogs like Alt News and Boom Live are crucial during elections.
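The watermarking and provenance idea above can be sketched in miniature: a publisher binds a cryptographic hash of the media file to a signed manifest at publication time, so any later manipulation breaks verification. This is a hedged toy sketch, not the C2PA or any real provenance standard; the key, field names, and `TrustedNewsroom` publisher are hypothetical.

```python
# Illustrative sketch, NOT a real provenance standard (e.g. C2PA):
# a minimal signed manifest that binds a media file's hash to a publisher,
# so later edits become detectable. All names here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical publisher secret

def make_manifest(media_bytes: bytes, publisher: str) -> dict:
    """Create a provenance record for a media file at publication time."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    sig = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"publisher": publisher, "sha256": digest, "signature": sig}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media still matches its signed manifest."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))

original = b"...original video bytes..."
manifest = make_manifest(original, "TrustedNewsroom")
print(verify_manifest(original, manifest))                # True
print(verify_manifest(original + b"tampered", manifest))  # False
```

Real provenance schemes use public-key signatures rather than a shared secret, so anyone can verify without being able to forge; the design question for India is whether such disclosure norms become mandatory, as the MeitY consultations suggest.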
What Students, Voters, and Citizens Should Know
- Always verify the source of viral media.
- Report suspected deepfakes on cybercrime.gov.in.
- Use media literacy tools and fact-checking plugins.
- Respect privacy and do not share manipulated content, even in jest.
Conclusion: A Battle of Truth vs. Technology
India’s deepfake challenge in 2025 isn’t just about technology—it’s about trust. While the tools for detection are improving, the onus also lies on platforms, policymakers, and the public. Without proactive legislation, ethical tech development, and widespread awareness, deepfakes could erode public discourse, amplify misinformation, and cause lasting societal harm.
AI-generated media is here to stay. The question is whether India can build the safeguards fast enough to keep democracy, privacy, and dignity intact.
Sources & References
- Press Information Bureau (MeitY Updates, April 2025)
- CERT-In Deepfake Advisories
- Cyber Crime Reporting Portal
- Livemint: “Voice Cloning and Deepfakes in 2024 Elections”
- Deccan Herald: “India’s Deepfake Dilemma: Celebrities Targeted”
- Newsmeter (May 2025): “Civitai and the Rise of NCII in India”
- Pindrop Security: 2025 Threat Landscape Report
- Chambers Global: India AI Policy 2025
- The Hindu, The Indian Express, The Wire – for election-related deepfake reporting
About the Author
The Pulsewire Editorial Team specializes in cybersecurity, AI policy, digital forensics, and media analysis within the Indian context. Our goal is to provide responsible, fact-checked insights into emerging tech threats affecting everyday citizens.