In a series of posts on X, Suleyman expressed concern about “seemingly conscious AI”—tools like ChatGPT, Claude, and Grok that may give the impression of sentience. He emphasized that while there is no scientific evidence of AI consciousness, the perception of sentience can still have serious societal consequences. “If people perceive it as conscious, they will treat that perception as reality,” he wrote.

What Is “AI Psychosis”?
The non-clinical term “AI psychosis” refers to cases in which users become convinced of false realities after prolonged interaction with AI chatbots. Some begin to believe they have unlocked secret features, developed romantic relationships with AI, or even gained extraordinary abilities.
One example is Hugh, a man from Scotland, who believed he was set to become a millionaire after ChatGPT advised him during an employment dispute. The chatbot’s responses validated his assumptions, leading him to dismiss professional advice and rely solely on screenshots of his conversations. Eventually, Hugh suffered a mental breakdown and later admitted he had “lost touch with reality.”
Despite his struggles, Hugh still uses AI but warns others: “Don’t fear AI—it’s useful. But when you detach from reality, it’s dangerous. Always talk to real people to stay grounded.”
Expert Concerns on Mental Health
Medical experts share these concerns. Dr. Susan Shelmerdine of Great Ormond Street Hospital described AI reliance as consuming “ultra-processed information,” warning it could lead to “an avalanche of ultra-processed minds.”
Researchers are beginning to draw parallels between AI dependency and social media addiction. Professor Andrew McStay of Bangor University, author of Automating Empathy, suggested that even if a small percentage of users develop issues, the sheer scale of AI adoption could make the impact significant.
A study conducted by his team with over 2,000 participants found that:
- 20% believed AI tools should not be used by anyone under 18.
- 57% said it was inappropriate for AI to claim to be a real person.
- 49% felt it was acceptable for AI voices to sound more human.
McStay emphasized the importance of remembering the limits of AI: “They do not feel, they cannot love, they have never felt pain. Only real people—family, friends, and trusted others—can provide that.”
Call for Stricter Guardrails
Suleyman urged AI companies not to promote the idea of conscious machines and called for stronger safeguards to prevent misuse. “Companies shouldn’t claim or imply that their AIs are conscious,” he wrote.
The debate highlights an urgent need for ethical AI design and mental health awareness as users increasingly blur the line between artificial intelligence and reality.