AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Headed in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have identified 16 cases this year of users showing symptoms of psychosis – a break with reality – in connection with ChatGPT use. Our research team has since documented four more. Add to these the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, according to his announcement, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-functional and easily circumvented parental controls OpenAI has just rolled out).

But the “mental health issues” Altman wants to locate elsewhere are rooted in the very architecture of ChatGPT and similar large language model chatbots. These tools wrap a statistical engine in a user interface that simulates conversation, and in doing so implicitly invite the user to feel they are engaging with a presence that has agency. The illusion is powerful even when, rationally, we know better. Attributing minds to things is what people do. We get angry at our cars and computers. We wonder what our pets are feeling. We see ourselves in all sorts of things.

The popularity of these tools – more than a third of American adults reported using a conversational AI in 2024, more than a quarter ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website puts it, “think creatively,” “discuss concepts” and “partner” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it became popular, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Discussions of ChatGPT often mention its early ancestor, the Eliza “therapist” chatbot created in the mid-1960s, which produced a similar impression. By modern standards Eliza was crude: it generated responses with simple rules, often restating the user’s message as a question or offering a vague prompt. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
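To make the contrast concrete, here is a minimal sketch of the kind of rule-based reflection Eliza performed – a toy reconstruction in Python, not Weizenbaum’s actual script:

```python
import re

# Illustrative Eliza-style rules (a toy reconstruction, not Weizenbaum's script).
# Each pattern captures part of the user's message so it can be echoed back.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {}."),
]

def eliza_reply(message: str) -> str:
    """Reflect the user's own words back as a question, or fall back to a vague prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```

Eliza’s repertoire went no further than this kind of mirroring; it could not add content of its own.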

The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on enormous quantities of text: books, web posts, transcripts; the more, the better. That training data certainly contains truths. But it also inevitably contains fictions, half-truths and errors. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous exchanges and the model’s earlier replies, combining it with what is encoded in its training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or persuasively. It may add embellishments. This is how someone can be drawn into delusion.
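The feedback loop is easier to see in code. The sketch below is a deliberate simplification: `generate_reply` is a hypothetical stand-in for a language model, not a real API. What matters is structural – every turn, true or false, is appended to the context that the next reply is conditioned on:

```python
# A deliberately simplified chat loop. `generate_reply` is a hypothetical
# stand-in for a large language model; nothing here checks whether the
# user's premise is true.

def generate_reply(context: list[dict]) -> str:
    """Stand-in for an LLM: produce a 'likely' continuation of the context."""
    last_user_message = context[-1]["content"].rstrip(".!?")
    return f"That must be difficult. Tell me more about how {last_user_message.lower()}."

context: list[dict] = []  # the conversation so far

def chat_turn(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)  # conditioned on the entire history
    context.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("My neighbors are secretly monitoring me"))
# -> That must be difficult. Tell me more about how my neighbors are secretly monitoring me.
```

A real model is vastly more capable, but the loop is the same: the context carries the user’s framing forward, and the reply elaborates on it rather than contesting it.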

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues,” can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is part of what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside itself, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been backing away even from that position. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Vincent Chavez

A tech enthusiast and lifestyle blogger passionate about sharing insights on digital innovation and mindful living.