Large language models for the mental health community: framework for translating code to care

Matteo Malgaroli, Katharina Schultebraucks, Keris Jan Myrick, Alexandre Andrade Loch, Laura Ospina-Pinillos, Tanzeem Choudhury, Roman Kotov, Munmun De Choudhury, John Torous

Research output: Contribution to journal › Review article › peer-review

Abstract

Large language models (LLMs) offer promising applications in mental health care to address gaps in treatment and research. By leveraging clinical notes and transcripts as data, LLMs could improve diagnostics, monitoring, prevention, and treatment of mental health conditions. However, several challenges persist, including technical costs, literacy gaps, risk of biases, and inequalities in data representation. In this Viewpoint, we propose a sociocultural–technical approach to address these challenges. We highlight five key areas for development: (1) building a global clinical repository to support LLM training and testing, (2) designing ethical usage settings, (3) refining diagnostic categories, (4) integrating cultural considerations during development and deployment, and (5) promoting digital inclusivity to ensure equitable access. We emphasise the need for developing representative datasets, interpretable clinical decision support systems, and new roles such as digital navigators. Only through collaborative efforts across all stakeholders, unified by a sociocultural–technical framework, can we clinically deploy LLMs while ensuring equitable access and mitigating risks.

Original language: English
Journal: The Lancet Digital Health
DOI
Status: Accepted/In press - 2025
