OpenAI’s internal data suggests that around 560,000 weekly ChatGPT users show signs of mental health crises, while more than one million display emotional dependence, raising concerns that excessive reliance on AI could undermine real-world relationships and stability.
OpenAI has expressed concern over a growing number of ChatGPT users displaying signs of serious mental distress, including symptoms linked to mania, psychosis, and suicidal ideation. According to the company’s internal estimates, nearly 0.07% of its 800 million weekly users—around 560,000 people—show potential indicators of mental health crises, while an additional 1.2 million users reportedly send messages suggesting suicidal intent or planning.
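For scale, the headline figures follow from simple arithmetic on the numbers OpenAI disclosed; the 0.15% share below is inferred from the reported counts rather than stated by the company:

0.07% of 800,000,000 weekly users ≈ 560,000 people
1,200,000 ÷ 800,000,000 ≈ 0.15% of weekly users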
Emotional dependence and responsible AI use
The company’s internal review also highlighted that more than one million users may be developing a strong emotional reliance on ChatGPT, often using it as their primary emotional outlet. OpenAI cautioned that while conversational AI can provide comfort or a sense of companionship, excessive dependence could undermine real-world relationships and emotional stability.
To mitigate these risks, OpenAI has convened a panel of more than 170 mental health professionals from around the world to review how its chatbot handles sensitive conversations involving distress, psychosis, or self-harm. The company has also updated its GPT-5 model to respond with greater empathy in such conversations, raising compliance with its internal safety standards from 77% to 91%.
Expert reactions and broader implications
Mental health experts have praised OpenAI’s proactive stance but urged caution. Dr. Hamilton Morrin of King’s College London called the initiative “an important step forward” but warned that the issue remains far from resolved. Dr. Thomas Pollak, a psychiatrist at South London and Maudsley NHS Foundation Trust, emphasized that even small percentages translate to a significant number of at-risk individuals when scaled to ChatGPT’s massive user base.
AI’s role in mental health conversations
Researchers remain divided over whether generative AI systems contribute to mental health problems or merely mirror existing societal struggles. Some experts suggest that chatbots could inadvertently amplify delusional or depressive tendencies, especially when users form deep emotional connections.
OpenAI maintains that there is no proven causal link between AI use and declining mental health, arguing instead that its models can guide distressed users toward professional help. CEO Sam Altman recently stated that the platform would begin “safely relaxing” restrictions around mental health discussions, ensuring users receive empathetic responses and clear pathways to real-world support.