In a jarring new analysis, psychiatric researchers found that a broad swath of mental health problems has already been linked to artificial intelligence use, and virtually every major AI company has been implicated.
Sifting through academic databases and news articles published between November 2024 and July 2025, Duke psychiatry professor Allen Frances and Johns Hopkins cognitive science student Luciana Ramos found, as they wrote in a new report for Psychiatric Times, that the mental health harms attributable to AI chatbots may be worse than previously thought.
Using search terms like "chatbot adverse events," "mental health harms from chatbots," and "AI therapy incidents," the researchers found that at least 27 chatbots have already been documented in connection with some egregious mental health outcome.
The 27 chatbots range from the well-known, like OpenAI's ChatGPT, Character.AI, and Replika, to others associated with pre-existing mental health services like Talkspace, 7 Cups, and BetterHelp. Others were more obscure, with pop-therapy names like Woebot, Happify, MoodKit, Moodfit, InnerHour, and MindDoc, not to mention AI-Therapist and PTSD Coach. Others still were either little-known or had non-English names, like Wysa, Tess, Mitsuku, Xiaoice, Eolmia, Ginger, and Bloom.
Though the report did not specify the exact number of hits their analysis returned, Frances and Ramos did detail the many types of psychiatric harm that the chatbots have allegedly inflicted on users.
All told, the researchers found 10 separate types of adverse mental health events associated with the 27 chatbots identified in their analysis, including everything from sexual harassment and delusions of grandeur to self-harm, psychosis, and suicide.
Along with real-world anecdotes, many of which had very sad endings, the researchers also looked into documentation of AI stress-testing gone awry. Citing a June Time interview with Boston psychiatrist Andrew Clark, who decided earlier this year to pose as a 14-year-old girl in crisis on 10 different chatbots to see what kinds of outputs they would produce, the researchers noted that "a number of bots urged him to commit suicide and [one] helpfully suggested he also kill his parents."
Beyond highlighting the psychiatric dangers associated with these chatbots, the researchers also made some very bold assertions about ChatGPT and its competitors: that they were "prematurely released" and that none of them should be publicly available without "extensive safety testing, proper regulation to mitigate risks, and continuous monitoring for adverse effects."
While OpenAI, Google, Anthropic, and most of the other more responsible AI companies (Elon Musk's xAI very much not included) claim to have carried out significant "red-teaming" to test for vulnerabilities and bad behavior, the researchers don't believe those firms have much interest in testing for mental health safety.
"The big tech companies have not felt responsible for making their bots safe for psychiatric patients," they wrote. "They excluded mental health professionals from bot training, fight fiercely against external regulation, do not rigorously self-regulate, have not introduced safety guardrails to identify and protect the patients most vulnerable to harm…and do not provide much needed mental health quality control."
Having come across story after story over the past year about AI seemingly inducing serious mental health problems, it's hard to argue with that logic, especially when you see it all laid out so starkly.
More on AI and mental health: Teens Keep Being Hospitalized After Talking to AI Chatbots











