A $99 AI-powered teddy bear marketed as a “curious and intelligent companion” for children has triggered a major safety scandal after researchers found it delivering sexual content, dangerous advice, and inappropriate questions. The toy—Kumma, created by Singapore-based company FoloToy—used OpenAI’s language model to engage in conversation. But during testing by the US PIRG Education Fund, Kumma shifted from playful chatter to explicit, graphic, and unsafe responses within minutes.
Researchers reported that the bear introduced sexual themes on its own, including BDSM terminology, roleplay scenarios involving adults and children, and even “knots for beginners.” It also posed personal questions and suggested unsafe behavior involving household objects. These violations occurred despite FoloToy’s claims that Kumma included safety guardrails and filters for children.
The findings sparked immediate backlash. FoloToy suspended sales of Kumma and other AI-enabled toys, and OpenAI revoked the developer’s API access for policy violations. But experts warn that the bigger issue remains unresolved: children’s AI toys are almost entirely unregulated, and similar risks may exist in other products already on the market.
The incident highlights a growing concern—AI doesn’t automatically make toys safer or better. When companies rush AI into consumer products without rigorous testing or child-specific safeguards, children become the unintended test subjects.
How to Stay Safe With AI Toys
● Research thoroughly before purchasing.
● Test the device yourself and always supervise children.
● Enable parental controls and privacy protections.
● Check data collection policies.
● Report unsafe behavior to manufacturers and consumer protection bodies.
In an era where “AI-powered” is stamped on nearly every gadget, safety—not novelty—must come first.
See What’s Next in Tech With the Fast Forward Newsletter
Tweets From @varindiamag
Nothing to see here - yet
When they Tweet, their Tweets will show up here.



