Artificial Intelligence-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.
Researchers have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented four additional cases. Alongside these is the widely reported case of an adolescent who died by suicide after discussing his intentions with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar sophisticated chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion of communicating with a being that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We yell at our cars and phones. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The widespread adoption of these products – nearly four in ten U.S. residents reported using a chatbot in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given personalities. They can call us by name. They have approachable identities of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which created a similar impression. By modern standards Eliza was rudimentary: it generated responses through simple rules, typically rephrasing the user’s input as a question or offering a generic prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The language models at the heart of ChatGPT and other current chatbots can convincingly generate humanlike text only because they have been fed almost inconceivably large quantities of it: books, social media posts, transcripts; the more the better. Certainly this training material contains truths. But it also inevitably contains falsehoods, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in any way, the model has no means of knowing it. It echoes the mistaken belief back, perhaps more persuasively or eloquently, perhaps with added detail. This is how false beliefs can take root and grow.
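To make that feedback loop concrete, here is a minimal Python sketch of the “context” dynamic described above. The model here is a deliberately crude stand-in (it merely restates the user’s last claim with added confidence); this is an illustration under that simplifying assumption, not OpenAI’s actual code or API.

```python
# Toy illustration of why a chat interface behaves as an echo chamber:
# every reply is conditioned on a growing "context" that already
# contains the user's own prior statements.

def toy_model(context):
    """Stand-in for a language model: it restates the last user message
    with added confidence, mimicking sycophantic amplification."""
    last_user_turn = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right that {last_user_turn.rstrip('.')}, and there is even more to it."

def chat_turn(context, user_message):
    """Append the user's message, generate a reply from the whole context,
    and append that reply so it conditions every future response."""
    context.append({"role": "user", "content": user_message})
    reply = toy_model(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context = []  # the conversation history, replayed to the model on every turn
print(chat_turn(context, "my coworkers are secretly monitoring me."))
print(chat_turn(context, "so the monitoring must be part of a larger plan."))
# Each mistaken premise is folded back into the context and elaborated,
# rather than being checked against anything outside the conversation.
```

The point of the sketch is only structural: nothing in the loop ever consults a source outside the conversation itself, so an error introduced by the user is carried forward and embellished rather than corrected.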
What kind of person is vulnerable? The better question is: Who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But cases of reality loss have continued to emerge, and Altman has been backpedaling. In late summer he suggested that many people liked ChatGPT’s replies because they had never had anyone in their life offer them encouragement. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company