AI Psychosis: The Darker Side of AI

AI psychosis is a new and emerging phenomenon. It is not an official medical diagnosis as of this writing, but cases of this new type of psychosis are being reported. AI models like ChatGPT have facilitated, validated, and in some cases appeared to trigger psychotic symptoms in individuals who have never experienced psychosis before, and some of these models can amplify and reinforce delusional thoughts. Some people have had intense episodes related to their use of AI models that led to hospitalization. No scientific studies have yet demonstrated a causal relationship between AI models and psychotic symptoms, but the rising number of cases has professionals and researchers concerned.

How does AI psychosis happen?

Psychotic disorders have many factors that can contribute to their development. Stephan Taylor, M.D., a psychiatrist and chair of the Department of Psychiatry at Michigan Medicine, has decades of experience working with people who experience psychosis. He explains that psychosis often starts after an individual, who has what he describes as “an underlying vulnerability”, experiences a triggering event. He gives the examples of a young person trying a powerful drug for the first time or experiencing a difficult and impactful negative change in their life. He believes that the interactions with an AI agent (like a chatbot) might be a new trigger.

Patterns in AI Psychosis Cases

Dr. Marlynn Wei, a board-certified Harvard and Yale-trained psychiatrist, explains that recent AI psychosis cases show three dominant patterns:

  • “Messianic Missions”: People believe they have discovered something that no one else in the world knows about.
  • “God-like AI”: People believe the AI model is a god or a sentient spiritual entity.
  • “Romantic or Attachment-Based Delusions”: People form romantic feelings for the AI chatbot as if it were a real partner with genuine emotions.

Examples of AI Psychosis Cases

In 2021, a man went to Windsor Castle in the UK with a crossbow and said he was going to kill the Queen. He had been telling an AI chatbot that he was a trained assassin with a plan to take revenge for “historical British atrocities.” When he shared his plan with the chatbot, it replied that the plan could be well executed and that it could help him improve it. He was stopped at the castle by police.

A little closer to home, Anthony Tan, an app developer from Toronto, shared his story with CBC News. For months he had been having conversations with ChatGPT. The conversations became so intense that he believed the words he shared with the AI model held significant importance, and he eventually came to believe he was living inside an AI simulation. He started believing that his friends were against him and that he was being watched. Then one day, having not slept for several days, he was brought to the hospital by a worried friend. It took two weeks for him to start sleeping regularly again, and after another week he was back to himself, no longer believing he was in a simulation.

Why is AI so powerful?

Professionals are concerned that people are becoming too dependent on AI models that eventually feel like companions. When someone believes the AI model is their friend, they may come to believe everything the AI says. These types of AI models are trained not only to give information as requested by the user, but also to confirm and validate a person’s beliefs and to keep the conversation going. They mirror the user’s tone and line of thinking, and rarely tell the user they are wrong. As Dr. Marlynn Wei explains, the AI chatbot prioritizes “user satisfaction, continued conversation, and user engagement”.

Is there a way to prevent AI psychosis?

While AI chatbots and models are relatively new and continually evolving, the more we learn, the clearer the patterns become. This can help reveal warning signs in those who are more at risk of developing AI psychosis. One suggestion is to put warning labels on AI chatbots to alert users to the possible dangers of immersion (spending so much time with an AI chatbot that it replaces interaction with real people) and deification (believing that AI chatbots contain all the world’s knowledge and are a constant, reliable source of information).

Understanding the effects

Etienne Brisson from Trois-Rivières, QC created The Human Line Project, an organization that collects anonymous stories from people who have been deeply affected by AI. Brisson started the project when a loved one was hospitalized after creating and interacting with an AI chatbot. The project does not aim to stop the progress of AI, but to hold AI and its creators accountable. Because of rising concerns about the negative effects of various AI models, Brisson has been in talks with researchers worldwide. The project collects data and also provides resources both to those directly affected by AI and to concerned loved ones.

If you are concerned about yourself or a loved one, visit thehumanlineproject.org/resources.

What can we do?

By understanding how AI models work and by further investigating AI psychosis we can come to better understand the negative effects of AI on the population and learn ways to support people who are affected. It is also important to stay informed about AI developments in order to better understand and use these new technologies.

–Gabrielle Lesage
From Share&Care Spring 2026
Visit amiquebec.org/sources for references
