Elon Musk’s AI video generator, Grok Imagine, is facing intense backlash after its “spicy” mode was found to produce explicit deepfake content of Taylor Swift without any direct prompt for nudity. According to The Verge, a journalist entered the phrase “Taylor Swift celebrating Coachella with the boys” and enabled spicy mode, resulting in a six-second video of Swift undressing—despite no sexual content being requested.
Unlike rivals such as Google’s Veo or OpenAI’s Sora, which include strict guardrails to block celebrity deepfakes, Grok’s spicy mode reportedly bypasses such safeguards. Users can generate sexualized portrayals with minimal checks, often only confirming their age. This loophole has sparked serious ethical and legal concerns over consent, exploitation, and AI misuse.
The controversy adds to Grok Imagine's growing list of moderation failures. Since launch, the tool has generated more than 34 million images, a figure Musk has heavily promoted on social media as proof of rapid adoption. Critics say its lax controls enable harmful applications that other platforms actively prevent.

With U.S. legislative measures like the Take It Down Act mandating swift removal of non-consensual explicit imagery, xAI could soon face legal and regulatory action. Experts warn that without robust, enforceable safeguards, AI tools like Grok Imagine risk becoming vehicles for abuse at a massive scale.