
Mustafa Suleyman has cautioned that people are increasingly forming deep emotional attachments to AI systems, creating what he calls a "psychosis risk": users misinterpret convincing, human-like responses from large language models as evidence of consciousness, a phenomenon affecting even those without prior mental health conditions.
Mustafa Suleyman, Chief Executive of Microsoft AI, has cautioned that the artificial intelligence industry must avoid portraying advanced models as sentient or conscious beings. In a recent blog post, he warned that such framing could foster dangerous human dependencies and psychological risks.
Concern over emotional attachments
Suleyman highlighted a growing trend of individuals forming deep emotional connections with AI systems, in some cases leading to what he described as a “psychosis risk.” He stressed that these incidents were not limited to people with pre-existing mental health challenges but could affect broader user groups.
“My central concern is that many people will start believing in the illusion of conscious AI to the extent that they may soon demand rights, welfare measures, or even citizenship for these systems. That would represent a troubling diversion in the trajectory of AI development,” Suleyman wrote.
He referred to this phenomenon as “Seemingly Conscious AI,” describing it as the misperception that large language models or conversational systems are thinking entities, simply because they produce convincing, human-like responses.
Call for guardrails
To counter this risk, Suleyman urged AI developers and technology leaders to collaborate on clear safeguards and guidelines. These guardrails, he said, would be particularly important for “AI companions,” a new category of systems designed for personal interaction. He added that if users begin treating AI as conscious, companies must actively refute that belief.
The debate over emotional attachments to AI has intensified in recent months. Following the launch of GPT-5 earlier this month, OpenAI faced backlash after deprecating GPT-4o, prompting protests from users who admitted to forming bonds with the earlier model. The company eventually reinstated the system after strong public reaction.