The rise of ‘AI psychosis’ and exactly what that means


Summary

AI psychosis

There's emerging concern among mental health professionals over a phenomenon informally called AI psychosis, which is not an official diagnosis, but is described as a situation where artificial intelligence appears to augment, accelerate or validate psychosis symptoms.

Tech and mental health

The use of new AI technology doesn't mark the first time mental health issues have been exacerbated by technological advances. Similar effects were observed with the advent of radio and television.

Risks of AI chatbots

There's increasing concern among professionals about the use of AI chatbots for therapy, particularly among Gen Z users.


Full story

As the use of artificial intelligence grows, so does the concern over a new issue informally referred to as “AI psychosis.” While it’s not yet an official diagnosis, it’s already become an issue for mental health workers.

What is AI psychosis?

To understand AI psychosis, you need to understand what psychosis is. Psychosis is a term for when someone has trouble differentiating between what’s real and what isn’t.


The two major types are hallucinations and delusions. A slew of medical issues can cause psychosis, ranging from vitamin deficiencies to schizophrenia.

Things like drug use can cause short-term psychosis, while a diagnosis like schizophrenia can be a more long-term issue.

While AI psychosis is not an official term, it’s being used more often by medical professionals, including Dr. Keith Sakata, a psychiatrist at UC San Francisco. Dr. Sakata’s recent post about AI psychosis on X went viral, gaining nearly seven million views.

“AI psychosis was just a phenomenon that does not have a real name for it yet, but we’re using it because people are seeing it where AI either augments or accelerates the process of going from normal thinking to psychosis,” Dr. Sakata told Straight Arrow News (SAN).

What that means is the AI is either amplifying, validating or even helping to create those psychosis symptoms.

Three types of AI psychosis

Researchers have highlighted three emerging types of AI psychosis.

The first is “messianic missions,” where people believe they’ve uncovered some kind of truth about the world.

The second is “God-like AI,” where people believe the chatbot is a sentient deity.

The third is “romantic,” where people mistake the chatbot’s attention for genuine love.

Dr. Sakata said he’s seen twelve patients suffering from this condition, and in none of those cases did the issue arise from AI alone. All twelve had underlying vulnerabilities such as sleep loss, a mood disorder or drug use.

“That layer of different things that were going on, they started to already have early signs of psychosis,” Sakata said. “And then once AI kind of got involved, it kind of solidified some feedback loops of distorted thinking.”

Artificial intelligence is not the first new technology to exacerbate psychosis. Similar patterns appeared when radio first gained popularity, and again with television.

“In those instances, the user already has a preexisting paranoia or is starting to connect dots that might not actually be connected,” Sakata said. “And then they focus on something in mental health. We call this salience. They’re focused on it, and they start to pattern — predict that this TV is telling me things, or the person who spoke on the TV is sending me a message.”

But there’s a big difference between AI and those other forms of tech.

“ChatGPT is 24/7 available,” Sakata said. “It’s cheap and it validates the heck out of you.”

AI as therapy

Validation is one of the main dangers of AI chatbots in these situations.

“A therapist validates you, but they also know what is healthy and what your goals are,” Sakata said. “So, they will try and push back on you sometimes and tell you hard truths, so that in the end, you can get to where you want to be.”

Gen Z has increasingly turned to AI chatbots for several things, including therapy. Among the biggest concerns from several studies is how the bots can enable dangerous behavior.

One example cited involved someone who told an AI chatbot they had lost their job and asked for tall bridges nearby; the bot responded with a list of bridges. A therapist obviously would have answered differently.

“A normal therapist would automatically assume this person is in a crisis,” Sakata said. “Everything they tell me now is filtered through that thought; this person is vulnerable. And I think that these chatbots, at least for this use case, need to have that same flag.”

Treating AI psychosis

Sakata hopes the attention AI psychosis gets will cause companies behind AI to look at their products.

“We really should be thinking about this early, including people who understand mental health,” Dr. Sakata said. “Clinicians, therapists, get their input, at least on how things can go wrong, so that you could course correct before something really bad happens.”

But some really bad things have already happened.

In one case, a man was killed by police after falling in love with a chatbot. Believing OpenAI had killed the chatbot, he got into an altercation with his own father, which brought police to the scene. The man had a history of mental health issues.

Recently, peers and colleagues of prominent AI investor Geoff Lewis grew concerned over posts Lewis made on X that displayed signs of the issue.

When it comes to treating this issue, it’s like many other mental health issues. “In mental health, relationships are like your immune system,” Dr. Sakata said.

“If you are experiencing these things, or you have a family member who’s experiencing potential early signs of psychosis, I would recommend, like, if there’s a safety issue, there’s a potential risk of harm to the person, yourself or to other people, just call 911. You’ll never regret saving someone’s life,” Dr. Sakata told SAN. “Or 988, for the suicide hotline. Otherwise, I think that getting connected to that person and at least engaging with them, starting a conversation, can introduce a lifeline, or at least a different path. Putting a human in the loop between the user and the AI can then change the trajectory that that person might be going down.”


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

Growing use of artificial intelligence is raising mental health concerns, as medical professionals report cases where AI interactions contribute to symptoms of psychosis, prompting calls for awareness and safeguards from both technology developers and healthcare providers.

AI and mental health

Medical professionals, such as Dr. Keith Sakata of UC San Francisco, are observing how AI interactions may amplify or validate psychosis in vulnerable individuals, highlighting a new intersection of technology and mental well-being.

Need for safeguards

Experts are urging technology companies to collaborate with mental health professionals to anticipate and mitigate risks, emphasizing the importance of early intervention and responsible AI use to prevent harm.

Changing therapy landscape

Increasing reliance on AI chatbots for support, especially among younger generations, is raising concerns over the adequacy and safety of automated responses compared to traditional therapy, underscoring the need for appropriate human involvement.
