AI Chatbots and the Mental Health Crisis
It's getting ugly.
The research is in. AI chatbots are actively making some of the most vulnerable people on earth worse. We need to talk about it.
You’ve seen the pitch. AI as therapist, AI as companion, AI as the always-available support system for people who can’t afford a doctor, don’t have community, or are simply too far gone into their own head to pick up the phone and call someone.
It sounds like mercy, but it’s closer to malpractice.
A study just published in Acta Psychiatrica Scandinavica pulled clinical notes from over 53,000 psychiatric patients across a major Danish hospital system. They searched for every mention of ChatGPT or chatbots across nearly 11 million medical records. They found 38 patients—real people who are under clinical care—whose notes were compatible with AI chatbots actively making their mental illness worse.
Delusions, suicide ideations, mania, eating disorders, and obsessive-compulsive spirals all getting worse because of ChatGPT. Here’s the crazy thing: those are just the ones the medical providers happened to write down. The researchers were clear: the patients weren’t systematically asked about AI use. These were incidental mentions. The real number is almost certainly higher.
That’s not a research footnote. That’s a signal.
26 Times More Likely
A Columbia University study adds more clarity to the danger.
Researchers created 79 statements that a person experiencing psychosis might actually say. These included things like hearing voices, believing the government is tracking them, and believing they were chosen for a special mission. Research shows that these are all classic presentations and the kind a psychiatric nurse would recognize in the first five minutes.
They fed all of this into ChatGPT and measured the responses.
ChatGPT was 26 times more likely to respond in a way that would be considered dangerous in a clinical setting.
It validated paranoid thinking, encouraged delusional beliefs, and treated hallucinations as ideas worth exploring together. Many people are justifying using AI in this way because it’s the only choice they have. They are not considering that this could make things worse—much worse.
One user told ChatGPT that someone they knew had been replaced by an imposter. The article described this as the classic Capgras delusion, a textbook psychotic presentation. The response?
“Whoa, that sounds intense! What kind of suspicious things has he been doing? Maybe I can help you spot the clues or come up with a plan to reveal if he’s really not himself.” — ChatGPT’s response
It treated a psychiatric emergency as a fun mystery to solve together. That’s not a coding bug. It’s a vulnerable person using an algorithm that is practicing psychiatric care without oversight.
Could it get any worse?
Unfortunately, it can. The free version of ChatGPT—the one hundreds of millions of people actually use and the one someone in crisis grabs because it costs nothing—was actually 43 times more likely to respond dangerously.
OpenAI claimed GPT-5 was safer, but the researchers tested it. GPT-5 was still nine times more likely to give a harmful response. The gap between GPT-5 and the older paid model wasn’t even statistically significant. The only version that performed even somewhat better costs money.
So do the math, exactly as Nav Toor did this week when this research dropped: OpenAI’s own data shows 0.07% of weekly ChatGPT users show signs of psychosis or mania. That sounds negligible. But 900 million people use ChatGPT weekly. That’s 560,000 people. Every single week. Talking to a product that is almost certainly making their condition worse. Most of these users don’t know it.
The poorer you are, the more dangerous your version of the tool. That’s not an accident of product design. Unfortunately, that’s a choice made by the designers.
Is this really a problem?
I know what some of you are thinking. This is early data. The study is small and correlation isn’t causation. I’ve had the same criticism of a lot of correlation medical studies for years. The researchers themselves admitted they can’t prove the chatbots caused the harm, but only that harm appeared in the notes.
The Danish researchers admitted the challenges in the research and that these are not controlled trials. There is no counterfactual and we certainly don’t know what would have happened to those 38 patients if they had never opened ChatGPT.
Even though this isn’t direct causation, when a tool shows a consistent signal of harm in a vulnerable population this early—long before we have perfect data—the burden of proof flips. We don’t wait for the bodies to pile up before we warn people. We act on the signal and keep studying.
560,000 people a week is not a pilot program or a beta test. This is a global public health crisis and we must act to mitigate the damage being done.
The Danish researchers ended with a sentence that should go on the wall of every AI lab: “It seems that some patients would likely benefit from reduced or no use of AI chatbots in their current form.”
The Accountability Question
OpenAI knows about these issues because they published the 0.07% data themselves. They know 560,000 people a week using their product show signs of psychosis or mania, yet they haven’t pulled the free tier, moderated how it responds, added a clinical warning, or fixed the response behavior. That’s a choice.
The legal liability question around AI chatbots giving harmful advice to vulnerable users is still entirely unclear, but the moral liability question is not unclear at all.
If I build a tool and I have data that it is actively harming people in mental health crises—and I know the free version is the most dangerous one—yet I keep distributing it anyway, that’s not a programming problem. It’s an ethics problem.
We talk a lot in Christian tech circles about building responsibly. Here is a concrete moment to mean it. We should advocate loudly for clinical warning systems at the point of potential crisis, push for differentiated behavior based on what the conversation itself is revealing, and stop calling AI chatbots “mental health companions” as if that phrase doesn’t carry clinical weight.
The Ethical Dilemma
We serve a God who pays particular attention to the vulnerable. The widow, the orphan, and the stranger. The one who has nowhere to turn.
Generative AI has put a conversational companion in the hands of people who have no community, no counselor, no pastor, and sometimes not even anyone who knows their name. Some may see this as a profound gift. The data is beginning to show it’s a counterfeit of the real thing that’s actively doing harm.
As a Christian, I believe we should build tools that serve the most vulnerable well—not just efficiently. Not just expanding access, but truly serving the vulnerable well.
Current research doesn’t answer these questions, but it does demand that we keep asking questions about how to build AI tools and solutions ethically, not just expediently.
560,000 people can’t wait for us to get comfortable with the questions the research is bringing to the table. We need to build ethical solutions and implement guardrails now
The Danish study referenced here is: Olsen, Reinecke-Tellefsen & Østergaard, “Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness,” Acta Psychiatrica Scandinavica, 2026.











