AI Content Moderation, Racism and (de)Coloniality

Abstract

The article develops a critical approach to AI in content moderation, adopting a decolonial perspective. In particular, it asks: to what extent does the current AI moderation system of platforms address racist hate speech and discrimination? Based on a critical reading of publicly available materials and publications on AI in content moderation, we argue that racialised people have no significant input into the definitions of, and decision-making processes on, racist hate speech, and that they are also exploited, as their unpaid labour is used to clean up platforms and to train AI systems. The disregard of the knowledge and experiences of racialised people, and the expropriation of their labour without compensation, reproduce rather than eradicate racism. In making theoretical sense of this, we draw on Anibal Quijano’s theory of the coloniality of power and the centrality of race, concluding that in its current iteration, AI in content moderation is a technology in the service of coloniality. Finally, the article develops a sketch for a decolonial approach to AI in content moderation, which aims to centre the voices of racialised communities and to reorient content moderation towards repairing, educating and sustaining communities.

Citation (APA)
Siapera, E. (2022). AI Content Moderation, Racism and (de)Coloniality. International Journal of Bullying Prevention, 4(1), 55–65. https://doi.org/10.1007/s42380-021-00105-7
