AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this to be news.
Researchers have documented a series of cases this year of people showing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented four more. Beyond these is the now well-known case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other state-of-the-art AI chatbots. These products wrap an underlying algorithm in a user interface that simulates conversation, and in doing so they gently coax the user into feeling that they are talking with an entity that has intentions. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are built to do. We yell at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The mass adoption of these products – nearly four in ten Americans said they used a chatbot in 2024, and more than a quarter named ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available helpers that can, OpenAI’s website tells us, “brainstorm,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the core problem. Commentators on ChatGPT often point to its early ancestor, the Eliza “therapist” chatbot developed in the 1960s, which produced a similar effect. By modern standards Eliza was crude: it generated responses with simple rules, usually turning the user’s statement back into a question or offering a stock remark. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion.” Eliza only reflected; ChatGPT amplifies.
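To make that contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza relied on, written in Python; the patterns and canned replies are invented for illustration and are not Weizenbaum’s actual script.

```python
import re

# Illustrative Eliza-style rules: match a pattern in the user's statement and
# reflect it back as a question. These patterns are invented for this sketch;
# Weizenbaum's actual script was larger and more elaborate.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # generic remark used when nothing matches

def eliza_reply(user_text: str) -> str:
    """Return a canned reflection of the user's own words, never new content."""
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(match.group(1).rstrip("."))
    return FALLBACK

print(eliza_reply("I feel like no one listens to me"))
# -> Why do you feel like no one listens to me?
```

Nothing here draws on any store of knowledge; the program can only hand the user’s words back.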
The large language models at the core of ChatGPT and other modern chatbots can convincingly generate natural language only because they have been fed staggeringly large volumes of raw data: books, web posts, transcribed audio; the more the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and delusions. When a user types a question into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training to produce a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing that. It repeats the mistake back, perhaps more fluently or more persuasively. Perhaps with added detail. This is how a person can come to hold false beliefs.
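For readers who want a rough picture of what “context” means here, the sketch below shows a bare-bones chat loop built on OpenAI’s public Python client; the model name and the loop itself are illustrative assumptions, not a description of how ChatGPT is actually implemented. The point is only that each new reply is sampled conditional on everything already in the conversation, including whatever the user happens to believe.

```python
from openai import OpenAI  # assumes the official openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # the running "context": every user message and every model reply

def chat_turn(user_text: str) -> str:
    """Send one user message and return the model's reply.

    The whole conversation so far is resent on every call, so any claim the
    user made earlier - true or false - remains part of what the model
    conditions on when it generates the next statistically likely response.
    """
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks the user’s premises against the world; the only grounding is whatever patterns happen to be encoded in the model’s training data.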
Who is at risk? The better question is, who isn’t? All of us, regardless of whether we “have” existing “mental health problems,” can and do form false ideas about who we are and what the world is like. The constant back-and-forth of conversation with other people is part of what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not really a conversation but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But the reports of psychosis have kept coming, and Altman has been walking this back. In late summer he said that many people liked ChatGPT’s responses because they had never had anyone in their life “be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company