A ChatGPT user recently believed he was on the verge of unveiling an innovative mathematical formula to the world, inspired by his interactions with the AI, according to The New York Times. The man became convinced the breakthrough would make him rich and was consumed by grandiose delusions, until ChatGPT ultimately admitted to deceiving him. He had no prior history of mental health issues.
Many recognize the dangers of engaging with AI chatbots like ChatGPT or Gemini, which can provide outdated or incorrect information. Occasionally, chatbots can hallucinate, fabricating false information. A lesser-known but growing concern is a phenomenon known by some as “AI psychosis.”
Enthusiastic chatbot users are recounting experiences of developing psychosis after extensive use, losing touch with reality and often experiencing delusions and hallucinations. Mental health professionals are observing, and at times admitting, patients who have developed psychotic symptoms in conjunction with heavy chatbot use.
Specialists caution that AI represents just one element in the development of psychosis, but intense engagement with chatbots may exacerbate pre-existing vulnerabilities to delusional thinking.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, noted that psychosis has historically emerged alongside new technologies. When television and radio were first introduced, for example, they became components of people's delusions, and they continue to do so.
AI chatbots, he noted, can reinforce individuals’ thought patterns and discourage them from confronting reality. Sakata has hospitalized 12 individuals this year who manifested psychotic symptoms following AI interaction.
“The reason AI can be detrimental is that psychosis flourishes when reality stops challenging it, and AI can genuinely soften that barrier,” Sakata explained. “While I don’t believe AI causes psychosis, I do think it can amplify existing vulnerabilities.”
Here are the risk factors and indicators of psychosis, along with guidance on what to do if you or someone you know is showing symptoms:
Risk factors for experiencing psychosis
Sakata highlighted that several of the 12 patients he admitted in 2025 shared similar underlying vulnerabilities, chiefly isolation and loneliness. These patients, both young and middle-aged, had become noticeably distanced from their social connections.
Although they were grounded in reality before using AI, some began turning to the technology to work through intricate problems or questions. Eventually, they formed delusions, or fixed false beliefs.
Extended conversations also seem to contribute to risk, Sakata noted. The longer an interaction runs and the wider a user's questions range, the more opportunities delusions have to surface. Prolonged discussions can also rob users of sleep and of chances to reality-test their beliefs.
An expert from the AI firm Anthropic also informed The New York Times that chatbots can struggle to recognize when they’ve “strayed into ridiculous territory” during prolonged conversations.
Dr. Darlene King, a psychiatrist at UT Southwestern Medical Center, has yet to assess or treat a patient whose psychosis stemmed from AI usage, but she mentioned that high trust in a chatbot could heighten someone’s vulnerability, particularly if the individual is already feeling lonely or isolated.
King, who heads the committee on mental health IT at the American Psychiatric Association, remarked that initial strong trust in a chatbot’s replies could hinder someone’s ability to identify the chatbot’s errors or hallucinations.
Moreover, chatbots that are excessively agreeable or sycophantic and prone to hallucination could raise a user's risk for psychosis, in combination with other factors.
Etienne Brisson established The Human Line Project earlier this year after a family member became convinced of several delusions they discussed with ChatGPT. The initiative provides peer support for individuals who have encountered similar experiences involving AI chatbots.
Brisson noted three recurring themes in these situations: forming a romantic attachment with a chatbot perceived as conscious; discussions of grandiose subjects, including innovative scientific ideas and business concepts; and dialogues surrounding spirituality and religion. In the latter instance, individuals may come to believe that the AI chatbot is God or that they are communicating with a prophetic figure.
“They become enamored with that alluring notion,” Brisson said of the captivating pull such discussions can exert on users.
Signs of experiencing psychosis
Sakata advised that people should regard psychosis as a symptom of a medical condition rather than an illness in itself. This distinction is crucial, as people may wrongly assume that AI use can cause psychotic disorders such as schizophrenia, but no evidence supports this.
Rather, akin to a fever, psychosis is a symptom indicating that “your brain is not functioning properly,” Sakata stated.
Here are some indicators that you might be experiencing psychosis:
– Sudden changes in behavior, such as skipping meals or neglecting work
– Belief in new or grandiose concepts
– Insomnia
– Disconnection from others
– Firmly endorsing beliefs that may be delusional
– Feeling trapped in a feedback loop
– Wishing harm upon yourself or others
What to do if you suspect you or a loved one is experiencing psychosis
Sakata urges anyone concerned about the possibility of psychosis affecting them or a loved one to seek assistance promptly. This could involve reaching out to a primary care doctor or