This article examines the experiences of individuals who have used Large Language Model (LLM) chatbots for mental health support. The authors interviewed 21 globally diverse participants and analyzed how users carve out unique support roles for their chatbots. The article introduces the concept of ‘therapeutic alignment’, i.e., aligning AI with therapeutic values in mental health contexts. While acknowledging the potential risks and harms associated with these chatbots, the study offers recommendations for the ethical and effective use of LLM chatbots and other AI mental health support tools.

Publication date: 25 Jan 2024
Project Page: https://arxiv.org/abs/2401.14362v1
Paper: https://arxiv.org/pdf/2401.14362