Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms

Abstract

Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms to automate key parts of their misinformation detection pipelines. While these algorithms offer a promising solution to the challenge of scale, their ethical and societal risks are not well understood. In this paper, we employ and extend the notion of informational justice to develop a framework for explicating issues of justice relating to representation, participation, distribution of benefits and burdens, and credibility in the misinformation detection pipeline. Drawing on the framework, (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. The framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with these algorithms and provide conceptual guidance for the design of algorithmic fairness audits in this domain.
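
As a concrete illustration of what one empirical measure of unevenly distributed burdens might look like in a fairness audit, the sketch below computes per-group false positive rates of a hypothetical misinformation flagger, i.e., how often legitimate content from each stakeholder group is wrongly flagged. This is a minimal sketch of a common fairness-audit quantity, not a measure taken from the paper; the function name, the synthetic data, and the two-group setup are illustrative assumptions.

```python
import numpy as np

def false_positive_rate_gap(y_true, y_pred, group):
    """Per-group false positive rates of a misinformation flagger,
    and the largest gap between any two groups.

    y_true: 1 = actual misinformation, 0 = legitimate content
    y_pred: 1 = flagged as misinformation by the algorithm
    group:  stakeholder group associated with each item
    """
    rates = {}
    for g in np.unique(group):
        legit = (group == g) & (y_true == 0)   # legitimate content from group g
        if legit.any():
            rates[g] = y_pred[legit].mean()    # share wrongly flagged
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Synthetic example: content from two communities, A and B
y_true = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 1])
y_pred = np.array([0, 1, 0, 0, 0, 1, 1, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"])

rates, gap = false_positive_rate_gap(y_true, y_pred, group)
print(rates)  # {'A': 0.2, 'B': 0.667}: B's legitimate content is flagged far more often
print(gap)    # ~0.467
```

A large gap of this kind would be one signal, under these assumptions, that the burdens of erroneous flagging fall disproportionately on a particular group of stakeholders.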

Citation (APA)

Neumann, T., De-Arteaga, M., & Fazelpour, S. (2022). Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms. In ACM International Conference Proceeding Series (pp. 1504–1515). Association for Computing Machinery. https://doi.org/10.1145/3531146.3533205
