This study presents MentalLLaMA, the first open-source Large Language Model (LLM) designed for interpretable mental health analysis on social media. Traditional discriminative methods for automatic mental health analysis on social media make predictions but offer little insight into why, so their results are hard to interpret; MentalLLaMA instead produces detailed explanations alongside its predictions. To address the lack of high-quality training data for this task, the authors construct the multi-task, multi-source Interpretable Mental Health Instruction (IMHI) dataset with 105K samples. Evaluated on the IMHI benchmark, MentalLLaMA shows promising results in both prediction accuracy and explanation quality.
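
As a sketch of how such an instruction-following model is typically queried, the snippet below loads a MentalLLaMA checkpoint with Hugging Face Transformers and asks for a prediction together with an explanation. The model ID and prompt wording here are illustrative assumptions, not the exact format from the paper; see the project repository for the released checkpoints and instruction templates.

```python
# Minimal sketch: prompting a MentalLLaMA checkpoint for a label plus explanation.
# The model ID below is an assumed Hugging Face identifier; swap in the checkpoint
# actually released in the project repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "klyang/MentaLLaMA-chat-7B"  # assumption, not confirmed by this summary

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

post = "I haven't slept properly in weeks and nothing feels worth doing anymore."
prompt = (
    f'Consider this post: "{post}" '
    "Question: Does the poster suffer from depression? "
    "Give a prediction and explain your reasoning."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (the prediction and explanation), not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```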


Publication date: 2023-09-24
Project Page: https://github.com/SteveKGYang/MentalLLaMA
Paper: https://arxiv.org/pdf/2309.13567