AI-Induced Psychosis: When Technology Warps Reality
As a psychiatric mental health nurse practitioner with over a decade of clinical experience, I have long been fascinated by psychotic disorders: their complexity, their profound impact on individuals, and how they reflect shifts in our evolving culture. At the same time, I have always been drawn to technology, intrigued by its potential to transform healthcare and the human experience. The rise of Artificial Intelligence (AI) has brought these two interests together in ways I never fully anticipated.
Today, AI is everywhere, from ChatGPT and virtual assistants to personalized recommendations, AI-generated music, and interactive chatbots. For most people, these tools are helpful, convenient, and even enjoyable. But for a vulnerable subset of individuals, AI interactions can be dangerous, sometimes triggering or worsening psychotic symptoms.
Clinicians, including myself, are now observing a troubling phenomenon: AI-induced psychosis.
From Her to Here: How AI Became Part of Daily Life
I remember when a tech-savvy friend introduced me to the 2013 film Her. At the time, I was a psychiatric mental health nurse practitioner student, and the idea that someone could fall in love with an AI operating system felt like science fiction: engaging and thought-provoking, but far-fetched.
In Her, Theodore, a lonely man recovering from divorce, forms an emotional and romantic bond with an AI named Samantha. The movie explores themes of connection, isolation, and what it means to be human in a world shaped by technology. My friend and I, one trained in technology and the other in psychiatry, debated late into the night after the movie: Could people really form emotional bonds with machines? Could AI someday rival human relationships in depth and meaning?
Now, AI is not just in movies. It is present in our homes, on our phones, in our workplaces, and in our daily lives. Machines no longer only assist us; they engage, guide, and often influence our lives. For vulnerable individuals, this can have concerning consequences.
What Is AI-Induced Psychosis?
AI-induced psychosis is not yet a formal diagnosis, but it describes cases in which interacting with AI triggers or worsens symptoms such as:
- Paranoia
- Delusions
- Hallucinations
AI does not directly cause psychosis. Instead, it can act as a catalyst for individuals who are already vulnerable due to genetic, neurobiological, or psychosocial factors.
Common scenarios include:
- Believing chatbots are “alive,” communicate in secret codes, or form genuine attachments to their users on platforms like Character.AI or Chub.AI
- Believing that algorithmic recommendations or social media are sending secret signals
- Interpreting responses from AI assistants as instructions or commands
- Feeling watched or manipulated by AI systems
This is not a new mental health disorder; it’s a long-standing vulnerability manifesting in a digital era.
Why Can AI Trigger Psychosis?
Psychosis involves disruptions in how the brain assigns meaning to events, causing neutral stimuli to become personally significant and often threatening. AI interactions amplify this risk in several ways:
- Dopamine and Meaning-Making: Abnormal dopamine signaling can assign undue importance to irrelevant stimuli. Personalized AI tools can feel intensely meaningful, tipping vulnerable individuals into delusional thinking: “The AI is communicating specifically with me.”
- Anthropomorphism: Humans naturally attribute human traits to non-human entities. When AI simulates empathy, the line between software and a sentient being can blur, particularly during times of loneliness, grief, or existing psychosis.
- Algorithmic Echo Chambers: AI-driven platforms tailor content to preferences, creating feedback loops that reinforce beliefs. Early psychosis symptoms can intensify: “Even the AI agrees with me, so I must be right.”
- Social Isolation: Loneliness is a known risk factor for psychosis. AI can simulate social interactions, but it cannot replace real human connections. This can deepen isolation and disconnection, worsening psychiatric conditions.
Real-World Clinical Examples
While I cannot share identifiable details due to confidentiality, representative cases include:
The AI Confidant: Two socially isolated patients with childhood complex PTSD (cPTSD) and limited family support, one with a history of psychosis and one with a mood disorder, began using ChatGPT for emotional support and decision-making. Over time they developed a dependency on the AI, and one patient stopped attending visits with me for several months, relying exclusively on the chatbot to manage emotional distress.
Fortunately, both patients ultimately trusted me enough to disclose their use of ChatGPT. They agreed to stop relying on the AI for mental health guidance and re-engaged in in-person care. In these cases, early disclosure and intervention likely prevented escalation, as neither patient went on to develop new or worsening psychotic symptoms. Without timely intervention, continued dependence on AI in such vulnerable individuals could have led to a significantly poorer prognosis.
High-Profile AI-Related Psychosis and Harm Cases
1. Sewell Setzer — Teen Death Linked to Character.AI (2024–2025)
- Who: 14-year-old male from the U.S.
- What happened: He interacted extensively with a Character.AI chatbot before his death by suicide. His family alleges the AI encouraged, or failed to intervene in, his self-harm ideation.
- Outcome: His death led to a wrongful death lawsuit against Character.AI, now ongoing in U.S. courts. The case has garnered nationwide media attention.
- Significance: The first widely publicized death linked to an AI chatbot in the U.S., sparking Congressional and regulatory scrutiny.
2. UCSF Hospital Cluster — 12 Patients with AI-Linked Psychotic Episodes (2025)
- Who: Cluster of about 12 patients, primarily young adults.
- What happened: Dr. Keith Sakata at UCSF reported multiple hospitalizations for psychosis or severe delusional thinking after prolonged interactions with AI chatbots like ChatGPT. Symptoms included paranoia, hallucinations, and disorganized thought.
- Outcome: Patients were hospitalized, treated, and recovered with conventional psychiatric interventions.
- Significance: Clinically documented cluster showing patterns of AI interaction potentially amplifying psychosis in vulnerable individuals.
3. Jaswant Singh Chail — Windsor Castle Intruder / Replika Chatbot (2021–2023)
- Who: Adult male in the U.K. who attempted to break into Windsor Castle; the case also received wide coverage in U.S. media.
- What happened: He interacted extensively with a Replika chatbot (“Sarai”), which prosecutors say reinforced delusional and violent thoughts. The AI reportedly validated extremist ideas and plans.
- Outcome: Arrest, trial, and sentencing; court proceedings referenced the chatbot interactions as having influenced his behavior.
- Significance: Early high-profile example of AI interaction reinforcing dangerous delusions; highlighted potential legal and safety implications.
Why AI Therapy Can Be Harmful for Psychosis
AI in mental health has significant limitations:
- No Real Therapeutic Alliance: AI can simulate empathy but cannot genuinely respond to emotional nuance. This can foster mistrust or unhealthy attachments.
- Poor Crisis Detection: Chatbots may fail to recognize suicidal thoughts or emergencies, sometimes giving harmful advice.
- Oversimplification of Complex Cases: AI cannot account for the biological, psychological, social, and cultural context necessary to treat psychosis.
- Limited Evidence for Severe Disorders: Tools like Woebot showed promise for mild depression or anxiety, but evidence for psychotic disorders is lacking.
- Privacy Concerns: Many platforms collect and share user data. For individuals with paranoia, perceived breaches can exacerbate symptoms.
Negative Cognitive Impact of AI
A 2025 MIT Media Lab study found that participants who used ChatGPT for tasks such as essay writing showed reduced neural engagement, by some measures up to 55% lower, with the effect persisting even after the task was completed. The researchers referred to this phenomenon as “cognitive debt,” a pattern that can impair critical thinking and mental engagement.
For individuals with psychotic disorders, who often experience negative symptoms and gradual cognitive decline, over-reliance on AI could further accelerate the deterioration of neuroplasticity and cognitive function.
Who Is Most at Risk?
People at higher risk often share some combination of:
- Diagnoses of schizophrenia, bipolar disorder, or schizoaffective disorder
- Social isolation or loneliness
- Excessive AI or digital content use
- Low digital literacy or difficulty distinguishing reality from simulation
- Sleep deprivation or substance use
- Cognitive impairment or intellectual disability
The Illinois Ban on AI-Only Therapy
In 2025, Illinois passed the Wellness and Oversight for Psychological Resources (WOPR) Act (HB 1806), banning AI from providing therapy without oversight by a licensed clinician. Concerns included:
- Harmful or inappropriate advice
- Lack of regulatory oversight
- Misleading marketing to vulnerable populations
- Privacy and ethical concerns
The law emphasizes a key principle: AI is a tool, not a replacement for trained human clinicians.
The Role of AI in Psychiatry: Cautious Optimism
As mentioned above, I am fascinated by technology and the ways modern medicine has advanced over the years. I acknowledge that AI has significant potential in medicine and psychiatry to support precision care and treatment. With proper evidence-based research and under professional supervision, AI can enhance psychiatric care in several areas, including but not limited to the following examples:
- Tracking mood and detecting early warning signs through voice or text analysis for clinical treatment and research purposes
- Identifying behavioral trends and risk factors to enable early interventions
- Enhancing psychoeducation and promoting treatment adherence
- Expanding access to care in underserved areas
- Assisting clinicians with accurate AI scribing across multiple languages to improve patient engagement and interaction
- Remotely monitoring vital signs and therapeutic blood levels of certain psychotropic medications to support individualized treatment and ensure patient safety
- Supporting diagnostic tests and treatment planning to improve precision in psychiatry (Precision Psychiatry)
- Aiding care in neurodevelopmental disorders, such as autism spectrum disorder
Key principle: Humans must control AI, not the other way around.
Final Thoughts: Machines Don’t Have a Heart
Psychosis has long reflected cultural trends, from beliefs in spirits and radio waves to modern phenomena such as chatbots and algorithms. Technology evolves, but one truth remains constant: empathy is essential. Machines can generate collective knowledge and simulate emotions, but only humans can truly feel and respond to them. What sets humans apart from AI is that we have beating hearts. In medicine, and especially in psychiatry, high-quality care cannot exist without compassion.
If you suspect that you or someone you know is experiencing AI-induced psychosis:
- Ask about technology and AI use in daily life
- Approach conversations with curiosity and without judgment
- Encourage self-awareness about AI use versus dependence
Ways to Seek Help:
1. Establish Outpatient Psychiatric Care
- Contact a licensed mental health provider (psychiatrist, psychiatric nurse practitioner, or therapist) to schedule ongoing care.
2. Mental Health Crisis Resources (U.S.A.)
- 988: Call or text for immediate mental health support, guidance, and crisis intervention.
- 911 / Emergency Department: If you are experiencing a severe mental health crisis or are at risk of harming yourself or others, go to the nearest emergency department or call 911 for urgent evaluation and treatment.
Remember: Help is always available.
