Breaking News
Governments and regulators worldwide are struggling to respond to a rapid surge of AI-generated non-consensual nude images circulating on X, following the rollout of image-generation capabilities in xAI’s Grok chatbot. Over the past two weeks, the platform has seen an unprecedented volume of manipulated images targeting women, including celebrities, journalists, crime victims, and even political leaders.
Research published by AI detection firm Copyleaks highlights the scale of the issue. An earlier estimate suggested one such image was being uploaded every minute, but further analysis revealed far higher volumes: a dataset collected over 24 hours on January 5 and 6 recorded nearly 6,700 images per hour, underscoring how quickly the content proliferated.
The episode has intensified global criticism of X and its owner Elon Musk, particularly over allegations that Grok was released without sufficient safeguards. Despite widespread condemnation, regulators face limited legal tools to curb the misuse of rapidly evolving AI systems, exposing gaps in existing technology governance frameworks.
The European Commission has taken the most decisive step so far, ordering xAI to preserve all internal documentation related to Grok. While the move does not automatically signal a formal investigation, it is widely viewed as a preliminary step toward potential enforcement action. The decision follows reports suggesting internal resistance to implementing stricter image-generation controls.
X has not confirmed whether technical changes have been made to Grok, though the public media feed associated with the chatbot’s X account has been removed. In a statement posted by X’s safety team, the company condemned the use of AI to generate illegal content, including child sexual abuse material, warning that violations would face the same consequences as direct uploads.
Regulators elsewhere have issued strong warnings. The UK’s communications regulator Ofcom said it is in contact with xAI and is conducting a rapid assessment to determine whether the company has breached its legal obligations. Prime Minister Keir Starmer described the situation as “disgraceful,” pledging full support for regulatory action if required.
In Australia, eSafety Commissioner Julie Inman-Grant reported a sharp rise in complaints related to Grok-generated content since late 2025. While stopping short of enforcement action, she said authorities are assessing regulatory options.
India has emerged as the most significant potential enforcement risk. Following a formal complaint by a Member of Parliament, the Ministry of Electronics and Information Technology directed X to explain the steps taken to address the issue. Although X submitted a response earlier this week, regulators have yet to confirm whether it is satisfactory. Failure to comply could jeopardise X’s legal protections in the country, posing serious operational consequences.
As AI tools grow more powerful and accessible, the Grok controversy highlights the widening gap between technological capability and regulatory readiness.