When AI moderates online content: Effects of human collaboration and interactive transparency on user trust


Abstract

Given the scale of user-generated content online, the use of artificial intelligence (AI) to flag problematic posts is inevitable, but users do not trust such automated moderation of content. We explore whether (a) involving human moderators in the curation process and (b) affording "interactive transparency," wherein users participate in curation, can promote appropriate reliance on AI. We test this through a 3 (Source: AI, Human, Both) × 3 (Transparency: No Transparency, Transparency-Only, Interactive Transparency) × 2 (Classification Decision: Flagged, Not Flagged) between-subjects online experiment (N = 676) involving classification of hate speech and suicidal ideation. We found that users trust AI to moderate content as much as they trust humans, but this depends on the heuristic that is triggered when they are told AI is the source of moderation. We also found that allowing users to provide feedback to the algorithm enhances trust by increasing user agency.

Citation (APA)

Molina, M. D., & Sundar, S. S. (2022). When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication, 27(4). https://doi.org/10.1093/jcmc/zmac010
