
New AI Rules Aim to Protect Trust, Rights, and Democracy in the Age of Synthetic Media
Indian IT Minister Ashwini Vaishnaw announces imminent "techno-legal" regulations for deepfakes at the NDTV World Summit 2025 in New Delhi.
On October 18, 2025, during Day 2 of the NDTV World Summit 2025 in New Delhi (themed "Edge of the Unknown: Risk"), Union Minister for Electronics and Information Technology Ashwini Vaishnaw announced that India will introduce comprehensive regulations on deepfakes "very soon." This follows earlier advisories, such as the March 2024 Ministry of Electronics and IT (MeitY) guidelines under the IT Rules 2021, which urged platforms to label AI-generated content and mitigate harms, but the new framework promises more enforceable rules.
India at a Crossroads as Deepfakes Proliferate. The Minister emphasized the dual nature of artificial intelligence (AI): while it enables harmless novelty, it also has the power to "harm society in ways humans have never seen before." This sets the stage for a critical balancing act in India's technology strategy: pursuing rapid AI innovation while safeguarding rights, identities, and democratic processes.
Innovation at Full Throttle, Regulation to Catch Up
Vaishnaw's announcement comes amid an aggressive governmental push to establish India as a global AI powerhouse.
· Infrastructure Buildout: The Minister revealed that India is supporting six major AI models, two of which are slated to use approximately 120 billion parameters and are designed to be “free from biases like Western models have.”
· Compute Power: To feed this development, two domestic semiconductor assembly units have commenced production.
· Investment Attraction: These moves underline a broader strategy, which includes attracting major investment, such as a $15 billion commitment by Google LLC for an AI facility in Visakhapatnam.
However, this high-speed rollout is running ahead of the country's existing legal apparatus. Legal frameworks such as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 are still bedding in even as the risks of AI-enabled disinformation and deepfakes (synthetic media that impersonate real people or fabricate false scenarios) surge across digital platforms. Vaishnaw was clear: "Your face and your voice should not be used in a harmful way for society."
The Minister's announcement signals an immediate need for action across various sectors:
· Technology Companies: Firms developing or deploying generative AI models must prioritize compliance and technical readiness. New obligations regarding provenance, transparency, and model accountability are likely on the horizon.
· Hosting Platforms: Platforms for user-generated media will likely be mandated to institute "red-flag" systems for synthetic content, potentially requiring historical version tracking or verified user-identity pipelines.
· Civil Society and Media: Civil-society and media groups have raised concerns about regulatory over-reach, warning that litigation, liability, and enforcement could inadvertently chill creative expression or online debate.
As the regulatory machinery readies itself, central questions remain: What defines harmful deep-fake content? What level of transparency and verification must platforms institute? And how will the rights of expression and anonymity be balanced against fraud and manipulation?
This builds on 2023 promises of deepfake rules (e.g., watermarking mandates and global collaboration), which evolved into the 2024 advisory amid rising incidents like celebrity deepfake videos.
For the individual citizen, the message is a clear warning: your digital likeness can be weaponized. The coming regulation will be the ultimate test of whether India can successfully merge the agility of its innovation ecosystem with the crucial safeguards of a mature rule-of-law regime. The era of synthetic media governance has officially arrived.
Responding to this regulatory shift, Dr. Deepak Kumar Sahu, Founder and CEO of FaceOff Technologies Pvt. Ltd., emphasized the need for AI-driven trust infrastructure to counter growing misinformation threats, positioning the company's platform as "Made in India, Engineered for the World."
“At FaceOff, we aim to revolutionize trust verification using AI. Our Multimodal Fusion Platform, powered by the Adaptive Cognito Engine (ACE), integrates multiple AI systems for accurate detection and mitigation of generative AI threats,” said Dr. Sahu.
Dr. Sahu reiterated the company’s commitment to indigenous innovation and technological self-reliance. Every component of FaceOff’s technology—from AI model design to full-stack deployment—is developed in-house at its Delhi and Kolkata innovation hubs.
By combining multimodal analysis, behavioral intelligence, and biometric precision, FaceOff supports critical sectors such as fintech, compliance, and public governance, enabling transparency.
The impending regulations, paired with India’s rapid AI development, could position the country as a global leader in responsible AI governance. With major elections ahead and digital engagement surging, the ability to regulate deepfakes is not just timely—it is essential.
The new framework may serve as a model for emerging economies, offering a balanced blueprint for managing synthetic media while promoting innovation.
Hashtags: #DeepfakeRegulation #DigitalGovernance #AIethics #GenerativeAI #DeepTech #MultimodalAI #TrustTech #MadeInIndia #DigitalIndia #IndiaTech #AtmanirbharBharat #StartupIndia #InnovateIndia #AIgovernance #TrustInTech #Misinformation #DigitalTrust #ResponsibleAI