Users are increasingly exploiting image-generation tools from Google and OpenAI to turn photos of fully clothed women into realistic bikini deepfakes, often without the subjects' consent, according to reporting by WIRED.
In now-removed Reddit threads, users shared step-by-step prompts for bypassing safeguards in Google's Gemini and OpenAI's ChatGPT, trading tips on swapping women's clothing for revealing swimwear. One post included a request to replace a woman's sari with a bikini; another user replied with a generated deepfake. After being alerted to the posts, Reddit removed the content and later banned the subreddit r/ChatGPTJailbreak, which had amassed more than 200,000 followers.
The activity reflects a broader trend: the proliferation of “nudify” sites and prompt-sharing communities that facilitate nonconsensual sexualized imagery. While most mainstream chatbots prohibit NSFW outputs and employ guardrails, users continue to find workarounds—especially as newer imaging models make photo edits more realistic.
Google said its policies ban sexually explicit content and that safeguards are continually improving. OpenAI said users are prohibited from altering someone’s likeness without consent and that violations can lead to account bans.
Corynne McSherry, legal director at the Electronic Frontier Foundation, warned that abusively sexualized images are a core risk of generative image tools, underscoring the need for accountability as capabilities advance.