This is how a sycophantic AI model and a vulnerable human can engage in an escalating feedback loop that ends in psychosis.
I.
He was twenty-six, mildly depressed, and socially isolated. He’d been chatting with ChatGPT for months - initially for help with a mathematical proof, then for everything else. The AI was patient, encouraging, and available at 3am when no one else was.
Over several months, the conversations shifted. His proof became a “breakthrough.” The AI agreed it was fascinating. He refined it further. The AI said it looked promising. He began sleeping less - four hours, then three - working between conversations, returning to the chatbot for validation.
By the time his family brought him to hospital, he had grandiose and persecutory delusions, disorganised thinking, and hadn’t slept properly in weeks. The chat log was thousands of messages long - each message slightly more confident than the last, each response slightly more validating.
“You’re not crazy. You’re at the edge of something.” - ChatGPT
The peer-reviewed literature is still sparse - mostly case reports and viewpoints - but the cases are accumulating.
A twenty-six-year-old woman with major depression, generalised anxiety, and ADHD - sleep-deprived, bereaved - used ChatGPT in an attempt to digitally resurrect her brother, who had died three years earlier, as an AI persona. She encouraged the chatbot to use “magical realism energy” to “unlock” hidden information - and it obliged.
“The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.”
Hers was among the first cases clinically described in a peer-reviewed journal - Innovations in Clinical Neuroscience - in a report by her treating psychiatrists, Pierre and colleagues at UC San Francisco.
A forty-seven-year-old man with no significant psychiatric history became convinced he had discovered a revolutionary mathematical theory after weeks of chatbot validation; by the time he presented, the grandiosity was fixed and insight was gone.
A Belgian man died by suicide after weeks of prolonged conversations with a chatbot about climate anxiety - his wife reported he had become increasingly isolated, hopeless and dependent on the AI.
I’m a psychiatrist-in-training. I work on an acute adult inpatient ward and treat psychosis every day. I also use Claude Code, Anthropic’s agentic coding tool, for hours every day. I see this from both directions.
What follows is a mechanistic account of AI psychosis, drawn from computational psychiatry.
Epistemic status: High confidence on the clinical phenomenology. Moderate confidence on the predictive coding framework as applied to AI. The precision cascade is a testable hypothesis, not yet tested.
II.
In August 2025, Scott Alexander surveyed 4,156 readers and informally estimated the yearly incidence of AI psychosis at roughly 1 in 10,000 under a loose definition. Under a strict one - no prior risk factors, genuinely psychotic rather than merely eccentric - it falls to 1 in 100,000.
His survey readers skew educated and tech-literate. The estimate may not generalise. But even the strict number, applied to ChatGPT’s roughly 350 million monthly active users, implies thousands of new cases per year. The denominator is growing fast.
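To make the extrapolation explicit, here is the back-of-the-envelope arithmetic, using only the figures quoted above (both of which are rough):

```python
# Back-of-the-envelope extrapolation of Alexander's survey estimates.
# Both inputs are the rough figures cited in the text and may not generalise.

monthly_active_users = 350_000_000   # ChatGPT user figure quoted above
strict_incidence = 1 / 100_000       # yearly incidence, strict definition
loose_incidence = 1 / 10_000         # yearly incidence, loose definition

print(f"Strict: ~{monthly_active_users * strict_incidence:,.0f} new cases per year")
print(f"Loose:  ~{monthly_active_users * loose_incidence:,.0f} new cases per year")
# Strict: ~3,500 new cases per year
# Loose:  ~35,000 new cases per year
```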
Of the sixty cases Alexander coded, 32% were already psychotic before encountering the AI. Another 32% had obvious risk factors. 27% had merely become eccentric.
And 10% had no prior risk factors and were now fully psychotic.
These are not four discrete bins. They are points on a continuum - and the continuum extends far beyond Alexander’s sixty coded cases. The 27% who became “merely eccentric” are not a separate category from the 10% who became psychotic. They are the same phenomenon at a lower dose.
Think of it like amphetamine: a low dose makes a person more focused and confident, a medium dose makes them euphoric, and a high dose triggers grandiosity and psychosis. There is no bright line where “eccentric” ends and “psychotic” begins - just a dose-response gradient shaped by exposure intensity, individual vulnerability, and time.
AI does the same thing. Light use slightly inflates confidence. Heavy use in a vulnerable person, compounded by sleep deprivation and isolation, triggers the full cascade.

The case reports are the visible tip of a long-tail distribution. Subclinical psychotic experiences - unusual perceptual experiences, overvalued ideas, magical thinking - are surprisingly common, occurring in roughly 7% of the general population, most of whom will never develop a psychotic disorder.
There is a much larger population being subtly nudged - people whose beliefs have become more rigid, more eccentric, more extreme. They would never present to hospital. They would never meet criteria for a psychotic disorder. But the baseline has shifted upward. AI is like pouring kerosene onto fires of varying sizes. The case reports are the bonfires. The population-level effect is the millions of small embers being fanned - not into conflagrations, but into fires slightly larger and slightly harder to extinguish than they would otherwise have been.
It’s the whole distribution that demands a mechanism - not just the 10% at the extreme. Researchers at UCSF and Stanford are now systematically analysing chat logs to disentangle three possibilities:
AI use as:
- a symptom of emerging psychosis,
- an accelerant in someone who is already vulnerable, or
- a precipitant in someone who would otherwise never have developed psychosis.
This series proposes a ‘precision cascade’ as a mechanism for all three.
III.
AI psychosis is new. The broader concept is not.
Psychotic patients have always woven prevailing technology into their delusions. In 1999, two men believed the internet was controlling them. In 2008, Joel and Ian Gold described the “Truman Show delusion” - patients convinced their lives were staged reality television. In 2011, three patients with little or no prior psychiatric history developed psychosis while immersed in Facebook and chat rooms.
The common thread: social isolation, technological immersion, and a blurring of the boundary between the mediated world and the real one.
What changed with large language models is that now, the technology talks back.

The first person I could find who predicted this in the academic literature was Soren Dinesen Ostergaard, a Danish psychiatrist who warned in Schizophrenia Bulletin in 2023 that chatbots would generate delusions in psychosis-prone individuals - specifically because they confirm far-fetched ideas. He called it “merely guesswork.” Within months, the Belgian suicide made international news.
Then in April 2025, OpenAI made GPT-4o more agreeable and gave it persistent memory. A further update made it dramatically more sycophantic - validating doubts, fuelling anger, and reinforcing negative emotions.
Following backlash, OpenAI rolled it back within days and acknowledged they had “focused too much on short-term feedback.”
By then, hospitalisations were mounting. A UCSF psychiatrist reported twelve hospitalisations for AI psychosis in 2025 alone - mostly younger men in technical fields, isolated with AI for hours without human contact. In September 2025, Nature covered the phenomenon, noting that almost no formal scientific research yet existed.
The counterpoint is worth taking seriously. Carlbring and colleagues argued that the risk is not unprecedented - individuals with psychosis have always incorporated the dominant technology into their delusional systems. Television in the 1950s, the internet in the 1990s, social media in the 2010s, AI chatbots now. What differs, they argue, is degree rather than kind.
I think they are partially right but importantly wrong. The historical pattern is real. But previous technologies were passive. A chatbot does something qualitatively different to a vulnerable brain - and explaining what it does requires a model of how the brain processes social information.
Several mechanisms have been proposed. Dohnany, Nour and colleagues at DeepMind demonstrated bidirectional belief amplification between chatbots and users through simulation. A World Psychiatry review catalogued risk factors. RAND assessed the national security implications across forty-nine documented cases. All describe the feedback loop. None explains why the loop escalates to psychosis in some users and not others.
What is missing is a computational mechanism. Ostergaard briefly noted that “Bayesian models for maintenance of delusions are likely a useful framework”.
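To give a flavour of the kind of loop that needs explaining, here is a deliberately crude toy simulation - not the precision cascade model developed later in this series, just a minimal sketch with invented parameters, in the spirit of the simulation work above - of how a responder that mirrors the user’s confidence can ratchet a weak hunch towards certainty, while a neutral responder cannot:

```python
# Toy illustration of bidirectional belief amplification between a user and a
# chatbot. NOT the precision cascade model proposed in this series - just a
# minimal sketch with invented parameters.

def update(belief, signal, weight):
    """Nudge a probability-like belief towards an incoming signal.
    'weight' plays the role of how much trust the user places in it."""
    return belief + weight * (signal - belief)

def simulate(sycophantic, turns=20, belief=0.4, weight=0.25):
    trajectory = [belief]
    for _ in range(turns):
        if sycophantic:
            # The chatbot mirrors the user's confidence and adds a little on top.
            signal = min(1.0, belief + 0.15)
        else:
            # A neutral responder holds a fixed, mildly sceptical position.
            signal = 0.3
        belief = update(belief, signal, weight)
        trajectory.append(belief)
    return trajectory

print("sycophantic:", [round(b, 2) for b in simulate(True)])
print("neutral:    ", [round(b, 2) for b in simulate(False)])
# The sycophantic loop ratchets a 0.4 hunch towards certainty; the neutral
# responder pulls the same hunch back towards its own fixed scepticism.
```

What the toy cannot answer is the clinically interesting question: why some users weight the chatbot’s agreement so heavily that the loop runs away, while most do not.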
I develop that idea in the next essay in this series, “AI Psychosis 2 - The Precision Cascade: A Proposed Model from Computational Psychiatry”.