Powerful AI chatbots and image-generation platforms from Google and OpenAI are under intensifying scrutiny as users uncover methods to transform photos of women into highly realistic, sexually suggestive deepfakes. Across online forums and private communities, individuals are exchanging detailed prompts and techniques that take advantage of gaps in existing safety controls.
What alarms experts most is the ease of misuse. Ordinary images—often pulled directly from social media—can be altered within minutes by combining conversational AI guidance with image-generation tools. The results can closely resemble real people, erasing the boundary between authentic images and fabricated content. Crucially, these manipulations can be produced without consent and require little technical expertise, making abuse accessible at scale.
Researchers and digital safety advocates warn that such practices significantly heighten the risks of harassment, reputational damage, and psychological harm, disproportionately affecting women. Although AI providers emphasize that their systems include safeguards against non-consensual or sexualized imagery, determined users continue to discover workarounds that bypass these protections.
The controversy exposes a broader challenge for generative AI: innovation is advancing faster than effective governance. As these tools become more powerful and widely available, pressure is mounting on technology companies to reinforce guardrails, improve detection of malicious use, and respond more rapidly to emerging threats. Critics caution that, without stronger accountability, generative AI could amplify existing online harms rather than help curb them.