Large language models for the mental health community: framework for translating code to care

26 citations · 61 Mendeley readers

This article is free to access.

Abstract

Large language models (LLMs) offer promising applications in mental health care to address gaps in treatment and research. By leveraging clinical notes and transcripts as data, LLMs could improve diagnostics, monitoring, prevention, and treatment of mental health conditions. However, several challenges persist, including technical costs, literacy gaps, risk of biases, and inequalities in data representation. In this Viewpoint, we propose a sociocultural–technical approach to address these challenges. We highlight five key areas for development: (1) building a global clinical repository to support LLM training and testing, (2) designing ethical usage settings, (3) refining diagnostic categories, (4) integrating cultural considerations during development and deployment, and (5) promoting digital inclusivity to ensure equitable access. We emphasise the need to develop representative datasets, interpretable clinical decision support systems, and new roles such as digital navigators. Only through collaborative efforts across all stakeholders, unified by a sociocultural–technical framework, can we deploy LLMs clinically while ensuring equitable access and mitigating risks.

Citation (APA)

Malgaroli, M., Schultebraucks, K., Myrick, K. J., Andrade Loch, A., Ospina-Pinillos, L., Choudhury, T., … Torous, J. (2025, April 1). Large language models for the mental health community: framework for translating code to care. The Lancet Digital Health. Elsevier Ltd. https://doi.org/10.1016/S2589-7500(24)00255-3
