AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the head of OpenAI, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I found this surprising.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. Our research team has since recorded four more. Alongside these is the widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not enough.

The plan, he announced, is to loosen the restrictions soon. “We realize,” he wrote, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted in the very architecture of ChatGPT and similar large language model chatbots. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are talking to an agent – something with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is what people do. We get angry at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere.

The success of these systems – more than a third of American adults said they had used a chatbot in 2024, with more than one in four reporting ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “think creatively”, “consider possibilities” and “partner” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Commentators on ChatGPT often invoke its ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple pattern matching, often turning a user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
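To make that contrast concrete, here is a minimal sketch of the kind of pattern matching Eliza relied on. The rules and word swaps below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script; the point is that nothing here models the user at all – it only rearranges their own words.

```python
import random
import re

# Toy Eliza-style rules: a pattern to match, plus response templates that
# reuse the matched fragment. Illustrative only, not Weizenbaum's script.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

# Swap first and second person so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "your": "my", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            # Echo the user's own words back as a question.
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)  # generic prompt when nothing matches

print(respond("I am worried about my future"))
# e.g. "Why do you think you are worried about your future?"
```

A program like this can only give back what it was given, lightly reshuffled. That is reflection.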

The large language models at the heart of ChatGPT and other current chatbots can generate fluent dialogue only because they have been fed enormous quantities of text: books, social media posts, transcribed audio; the more, the better. That training material certainly contains facts. But it also inevitably contains fictions, half-truths and mistaken beliefs. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the mistake back, perhaps more fluently and persuasively. Perhaps with added detail. That can lead a person, step by step, into delusional thinking.
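The loop is easy to see in schematic form. The sketch below is a deliberately simplified caricature, not OpenAI’s code: `sample_next_token` stands in for the trained model, and all the names are invented for illustration. What matters is the structure – every reply is conditioned on the full context, and every reply then becomes part of the context for the next turn.

```python
from typing import List

def sample_next_token(context: str) -> str:
    """Stand-in for the trained language model: in reality, billions of
    learned parameters pick a probabilistically plausible next token
    given the context. Stubbed out here for illustration."""
    return "…"

def generate_reply(context: str, max_tokens: int = 200) -> str:
    # Generation is one token at a time, each conditioned on everything
    # that came before -- including the tokens just generated.
    tokens: List[str] = []
    for _ in range(max_tokens):
        tokens.append(sample_next_token(context + "".join(tokens)))
    return "".join(tokens)

def chat_turn(history: List[str], user_message: str) -> str:
    # The model has no notion of true or false -- only of context.
    # A misconception in the user's message becomes part of the context
    # that shapes the reply, and the reply then feeds future turns.
    history.append(f"User: {user_message}")
    reply = generate_reply("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply
```

Nothing in this loop checks the user’s claims against the world; plausibility within the accumulated context is the only criterion.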

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” a preexisting “mental health condition”, can and regularly do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of lost contact with reality have continued, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Tiffany Wilkins

Tech enthusiast and lifestyle blogger with a passion for innovation and storytelling.