Applying language models for suicide prevention: evaluating news article adherence to WHO reporting guidelines

Abstract

The responsible reporting of suicide in the media is crucial for public health, as irresponsible coverage can promote suicidal behaviors. This study examined the capability of generative artificial intelligence, specifically large language models, to evaluate news articles on suicide against World Health Organization (WHO) guidelines, potentially offering a scalable solution to this critical problem. The research compared assessments of 40 suicide-related articles by two human reviewers and two large language models (ChatGPT-4 and Claude Opus). Results showed strong agreement between ChatGPT-4 and the human reviewers (ICC = 0.81–0.87), with no significant differences in overall evaluations. Claude Opus showed good agreement with the human reviewers (ICC = 0.73–0.78) but tended to estimate lower compliance. These findings suggest that large language models have potential to promote responsible suicide reporting, with significant implications for public health: the technology could provide immediate feedback to journalists, encouraging adherence to best practices and potentially transforming public narratives around suicide.
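
For readers unfamiliar with the agreement statistic cited above, the following is a minimal sketch, not the authors' analysis code, of how an intraclass correlation coefficient (ICC) between human and LLM ratings can be computed in Python using the pingouin library. The article identifiers, rater labels, and compliance scores are invented placeholders, not data from the study.

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (article, rater) pair.
# "human" and "llm" are hypothetical rater labels; scores are
# placeholder guideline-compliance ratings.
data = pd.DataFrame({
    "article": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":   ["human", "llm"] * 4,
    "score":   [7, 8, 5, 5, 9, 8, 6, 7],
})

# Compute all ICC variants; the study's reported values would
# correspond to one of these types (e.g., two-way agreement).
icc = pg.intraclass_corr(data=data, targets="article",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```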

Citation (APA)

Elyoseph, Z., Levkovich, I., Rabin, E., Shemo, G., Szpiler, T., Shoval, D. H., & Belz, Y. L. (2025). Applying language models for suicide prevention: evaluating news article adherence to WHO reporting guidelines. Npj Mental Health Research, 4(1). https://doi.org/10.1038/s44184-025-00139-5
