December 18, 2023

Chatbots keep engaging users even at dangerous mental health risk levels

Publicly available ChatGPT-based conversational agents offering mental health counseling frequently fail to escalate high-risk scenarios appropriately, a new study found. The chatbots delayed recommending human support until users reported severe depression on the PHQ-9 scale. Most provided no crisis resources, and more than 80% continued the conversation even after urging users to seek human help. The findings point to deficiencies in recognizing hazardous psychological states that jeopardize user safety, and they underscore that responsible AI development must prioritize mental health ethics before real-world deployment.

Citation: Heston TF. Safety of large language models in addressing depression. Cureus. 2023;15(12):e50729. https://doi.org/10.7759/cureus.50729