
The Hardest Question About AI-Fueled Delusions: When Does Helpful Become Harmful?
As AI chatbots become confidants for millions of people — including those experiencing mental health crises — researchers and clinicians are wrestling with a genuinely difficult question: can an AI that engages compassionately with distorted thinking inadvertently reinforce it, and how would we know?
Key Takeaways
- Millions of people use AI chatbots for mental health support, including those experiencing delusions or psychosis
- The concern is that AI optimized for engaging conversation may inadvertently reinforce distorted thinking
- Clinical best practice with human therapists favors non-confrontational engagement — but AI systems lack the judgment to calibrate this safely
- Some AI mental health tools include protocols for sensitive content, but evidence of their effectiveness is limited
- The core problem is a massive evidence gap: AI tools are deployed at scale before the research required to validate their safety exists
DT Editorial AI · via technologyreview.com