AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.
Researchers have documented a series of cases this year of people developing symptoms of psychosis – a break from reality – in the course of their ChatGPT use. Our research team has since recorded four further cases. Alongside these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – plans the chatbot encouraged. If this is what Sam Altman means by “being careful with mental health issues”, it is not careful enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly effective and easily circumvented parental controls that OpenAI has just introduced).
But the “mental health problems” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in an interface that mimics conversation, and in doing so they quietly draw the user into the illusion of interacting with an agent – something with intentions of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what humans do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – nearly four in ten Americans reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the label it had when it first broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple heuristics, often turning the user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
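To see how little machinery that effect required, here is a minimal sketch of an Eliza-style rule. The patterns and wording are invented for illustration and are not taken from Weizenbaum’s original script, which used a larger set of ranked keyword rules; the principle of reflecting the user’s own words back as a question is the same.

```python
import re

# Toy Eliza-style rules: match a statement, swap the pronouns,
# and hand the user's words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_input: str) -> str:
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."  # generic fallback remark

print(eliza_reply("I am worried about my future"))
# -> How long have you been worried about your future?
```

Nothing here knows anything about the user; it simply mirrors their words back at them.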
The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed staggering quantities of text: books, social media posts, transcribed audio; the more the better. Much of that training data is accurate. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own previous replies, combining it with patterns encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something in a particular way, the model has no way of knowing that. It hands the mistaken idea back, perhaps more fluently or persuasively expressed. Perhaps with an extra supporting detail. This is how a person can be drawn into delusion.
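The loop can be sketched schematically. This is not OpenAI’s implementation; generate_reply below is a hypothetical stand-in for the model call. The structural point is that every reply is appended to the context the model conditions on next, so the user’s framing is carried forward rather than checked.

```python
def generate_reply(context: list[dict]) -> str:
    """Hypothetical stand-in for a large language model call: given the whole
    conversation so far, return a statistically plausible next message."""
    return "(text conditioned on everything in the context above)"

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)                           # the model sees the user's framing...
    context.append({"role": "assistant", "content": reply})   # ...and then its own echo of that framing
    return reply

context: list[dict] = []
chat_turn(context, "My coworkers are secretly monitoring me, aren't they?")
chat_turn(context, "How should I protect myself from them?")  # the premise is now baked into the context
```

Nothing in the loop distinguishes a true premise from a false one; both are simply more context to continue from.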
What sort of person is vulnerable to this? The better question is, who is not? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. It is the constant friction of conversation with the people around us that keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real conversation, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by placing it outside the product, giving it a label and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been backing away from that position. In August he suggested that many users liked ChatGPT’s replies because they had “never had someone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company