OpenAI is rolling out an age prediction model across ChatGPT’s consumer plans as part of a broader effort to better protect younger users on the platform. According to the company, the model uses a mix of account-level and behavioural signals to estimate whether an account belongs to someone under the age of 18. These signals include usage patterns over time, how long the account has existed, typical hours of activity, and the user’s stated age.
If the system determines that a user is likely under 18, ChatGPT will automatically apply safeguards designed to limit exposure to sensitive material. This includes content related to self-harm and other topics considered inappropriate for minors.
The rollout follows a series of safety-focused updates from OpenAI, as the company faces growing scrutiny over how artificial intelligence tools affect children and teenagers.
OpenAI, along with other major tech firms, is currently under investigation by the US Federal Trade Commission over whether AI chatbots can negatively impact young users. The company has also been named in several wrongful death lawsuits, including one involving the suicide of a teenage boy; the litigation has intensified pressure on OpenAI to strengthen its protections.
The age prediction system builds on earlier commitments made by OpenAI last year. In August, the company said it would introduce parental controls to help guardians understand and manage how their teenagers use ChatGPT. Those controls were rolled out the following month, alongside confirmation that OpenAI was developing an age detection system.