Moderating mental health: Addressing the human–machine alignment problem through an adaptive logic of care

Abstract

Covid-19 deepened the need for digital-based support for people experiencing mental ill-health. Discussion platforms have long filled gaps in health service provision and access, offering peer-based support usually maintained by a mix of professional and volunteer peer moderators. Even on dedicated support platforms, however, mental health content poses difficulties for human and machine moderation. While automated systems are considered essential for maintaining safety, research lags in understanding how human and machine moderation interact when addressing mental health content. Working with three digital mental health services, we examine the interaction between human and automated moderation of discussion platforms, contrasting ‘reactive’ and ‘adaptive’ moderation practices. Presenting ways forward for improving digital mental health services, we argue that an integrated ‘adaptive logic of care’ can help manage the interaction between human and machine moderators as they address a tacit ‘risk matrix’ when dealing with sensitive mental health content.

Citation (APA)

McCosker, A., Kamstra, P., & Farmer, J. (2023). Moderating mental health: Addressing the human–machine alignment problem through an adaptive logic of care. New Media and Society. https://doi.org/10.1177/14614448231186800
