India has recently introduced mandatory AI labeling rules for visual content, requiring a disclaimer to cover at least 10% of an image's area, making it one of the world's strictest regulations. The aim is to combat surging deepfake incidents and the growing cost of AI-driven digital fraud, particularly as women in India face escalating risks from manipulated online imagery.
Yet experts in deepfake detection warn that these labels are easily bypassed. Ken Jon Miyachi, CEO of BitMind, a global deepfake detection company serving more than 100,000 users, the majority of them in India, notes that watermarks and labels alone offer little deterrence. “Simple techniques like screenshots or basic cropping remove the evidence, rendering the existing watermarking rules ineffective against sophisticated manipulation,” Miyachi explains.
Miyachi stresses that just mandating AI disclaimers isn’t enough; robust enforcement and technological upgrades are essential. With deepfakes now threatening public figures and ordinary citizens alike, India urgently needs detection solutions such as advanced AI image forensics, hash-based verification, and tamper-evident systems. These tools enable not just identification but rapid flagging of synthetic media across platforms.
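To make one of those ideas concrete: hash-based verification, in its simplest form, compares a file's cryptographic fingerprint against a registry of known AI-generated images. The Python sketch below is illustrative only; the registry value and file name are hypothetical, and, as the comments note, an exact hash breaks the moment an image is screenshotted or re-encoded, which is precisely the gap image forensics and perceptual matching are meant to fill.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical registry of hashes published alongside labeled AI images.
# In practice this would be a platform- or regulator-maintained database.
KNOWN_AI_IMAGE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def is_registered_ai_image(path: str) -> bool:
    """True if the file's exact bytes match a registered AI-generated image.

    Any re-encode, crop, or screenshot changes the bytes and therefore the
    hash, so this check alone cannot catch manipulated copies; that gap is
    why perceptual matching and image forensics are also needed.
    """
    return sha256_of_file(path) in KNOWN_AI_IMAGE_HASHES


if __name__ == "__main__":
    sample = "suspect_image.png"  # hypothetical local file
    if Path(sample).exists():
        print(sample, "registered as AI-generated:", is_registered_ai_image(sample))
```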
He points out that India’s approach appears more stringent than recent guidelines in the US and EU, which prioritize optional labeling and platform-level moderation. However, without enforceable penalty frameworks and truly persistent, machine-readable watermarks, the impact of India’s rules remains questionable.
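As for what “persistent” could mean in practice: perceptual hashing (a matching technique rather than a watermark, and not something the rules themselves prescribe) produces a fingerprint that changes only slightly under re-encoding or mild cropping. A minimal sketch, assuming the open-source Pillow and imagehash packages and hypothetical file names:

```python
# pip install pillow imagehash  (third-party libraries, assumed available)
from PIL import Image
import imagehash


def perceptual_distance(path_a: str, path_b: str) -> int:
    """Hamming distance between the perceptual hashes of two images.

    Unlike an exact cryptographic hash, a perceptual hash shifts only
    slightly under re-encoding, resizing, or mild cropping, so a small
    distance suggests the images share the same visual content.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # ImageHash subtraction returns the Hamming distance


if __name__ == "__main__":
    # Hypothetical files: an original labeled AI image and a cropped screenshot of it.
    original, screenshot = "labeled_original.png", "cropped_screenshot.png"
    d = perceptual_distance(original, screenshot)
    # Rule of thumb: a distance well below the 64-bit hash length indicates a likely match.
    print(f"perceptual hash distance: {d} -> {'likely same image' if d < 10 else 'likely different'}")
```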
Miyachi calls for India to step up public education and real-time reporting systems, and to foster collaboration between regulators, tech companies, and detection startups. “Indian users need accessible mobile tools and browser plugins today, not just new rules, if the country hopes to stay ahead of fast-evolving AI threats,” he says.