Online Extremism, AI, and (Human) Content Moderation

  • Barnes, M. R.
Citations: N/A
Readers: 9 (Mendeley users who have this article in their library)

Abstract

This paper has three main goals: (1) to clarify the role of artificial intelligence (AI)—along with algorithms more broadly—in online radicalization that results in “real world violence”; (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons; and (3) to demonstrate that platform companies’ (e.g., Meta, Google) statements of preference for technological solutions function as a type of propaganda that serves to erase the work of the thousands of human content moderators and to conceal the harms they experience. I argue that the proper assessment of these important, related issues must be free of the obfuscation that the “better AI” proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.

Citation (APA)

Barnes, M. R. (2022). Online Extremism, AI, and (Human) Content Moderation. Feminist Philosophy Quarterly, 8(3/4). https://doi.org/10.5206/fpq/2022.3/4.14295
