The Dialogue, a public policy think tank based in New Delhi, convened a closed-door roundtable on the Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, relating to Synthetically Generated Information (SGI) (“Draft SGI Amendment Rules”) on 8th December 2025 in New Delhi. Released in October 2025, the Draft Rules have, over the past month, generated significant debate over their technical feasibility, legal validity and likely effectiveness. This has prompted calls for reform; indeed, recent media reports suggest that the government may be preparing to drop the proposed labelling requirement for films and advertising content. While this may ease obligations for the film and advertising sectors, broader issues, including a lack of clarity around definitions, risk-tiering, detection feasibility, and the potential burden on content creators, will persist.
The roundtable, moderated by Mr. Sachin Dhawan, Deputy Director at The Dialogue, brought together a cross-section of stakeholders, including creators, advertising and marketing professionals, brand representatives, legal experts, civil society organisations, and digital platforms, to examine the evolving regulatory landscape and its practical consequences for content production and digital expression in India.
Speakers highlighted that a significant share of online content today is produced by professional creators and production teams rather than casual users, and warned that the current drafting risks treating all AI-touched creativity as inherently suspicious.
From the creator side, it was stressed that the creator economy is fundamentally a trust economy, where individual credibility functions as both product and brand. They distinguished between high-risk synthetic media and everyday “AI-enhanced” workflows, and cautioned that blanket labelling could stigmatise benign creative practices.
“There is a clear difference between AI-authored content and AI-enhanced content. Almost everything in our industry is AI-enhanced now, but my mileage as a creator is still built on trust. If every video I make ends up with an ‘AI’ banner just because I used captions or a clean-up tool, my credibility is at stake. Strong labels should focus on high-risk areas, such as finance, health, political messaging, deepfakes – not on routine, low-risk enhancements that simply help us work better.”
- Tuheena Raj, Content Creator
Advertising and brand representatives spoke about the many ways AI is already embedded in their workflows, from scriptwriting, editing and localisation to A/B testing and performance optimisation. They emphasised that low-risk uses, such as accessibility features, basic image correction, or noise removal, should not be treated in the same way as synthetic endorsements, identity theft, or deepfake scams. Participants also pointed to the risk of contractual ‘liability dumping’, where ambiguity in the Rules could lead to compliance obligations being pushed downstream onto small creators and agencies with limited bargaining power.
Platform representatives drew on their experience with global AI and synthetic media frameworks. They noted that while many jurisdictions are moving towards transparency and watermarking obligations for synthetic media, most do so through high-level, risk-based principles rather than rigid, format-specific prescriptions.
“We work across multiple jurisdictions and deal with the EU AI Act and North American approaches to AI regulation on a daily basis. Even in those ‘mature’ territories, you don’t yet see such detailed rules on how every piece of synthetic media must be tagged. Most regimes are still setting high-level, risk-based principles rather than mandating that an entire video carry a permanent ‘synthetic’ banner. It’s worth asking whether we are underestimating users, and whether this kind of blanket labelling will actually solve the deepfake problem we are worried about.”
- Shivani Singh, Glance Experience (InMobi Group)
Legal experts highlighted that the Draft Rules conflate transparency with harm prevention and do not embed risk differentiation. The lack of a tiered framework, they argued, could result in overregulation of everyday content while failing to address serious threats like deepfake political manipulation or financial fraud.
“The absence of risk grading results in overbroad mandates that treat all content with suspicion. This stems partly from the difficulty of reliably detecting deepfakes and partly from a lingering distrust of AI-generated content, exacerbated by early creator resistance to AI tools. As AI becomes a normal part of digital creativity, these perceptions will evolve. Still, in the meantime, we risk using labelling as a blunt instrument that penalises innovation without meaningfully curbing harm.”
- Akshat Agarwal, AASA Chambers
Across the discussion, there was consensus that India’s regulatory approach must distinguish between genuinely harmful synthetic media and legitimate creative augmentation. Participants recommended a tiered, risk-based approach with clearer definitional boundaries, exemptions for routine enhancements and accessibility tools, and reliance on interoperable standards for provenance and watermarking, rather than blanket detection obligations.
The roundtable highlighted that the governance of synthetic media is not simply a technical or compliance question; it is a cultural and economic one. The real challenge lies in designing a framework that protects against deception without casting suspicion over creativity itself. The goal should be to build trust frameworks, not surveillance systems; to protect identity and democratic integrity, not penalise everyday expression.
The roundtable is part of The Dialogue’s ongoing work on AI governance, platform regulation and online safety in India. Insights from the discussion will inform a short outcome document and future engagements with policymakers, industry bodies, creators and civil society to develop a balanced, innovation-supportive framework for synthetic media governance.
About The Dialogue: The Dialogue is a premier research and tech policy think tank committed to driving informed public discourse on technology and its impact on society. Through rigorous research and stakeholder engagement, The Dialogue aims to shape policies that promote innovation, inclusivity, and responsible technology use in India.