Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research

7 citations · 15 Mendeley readers

Abstract

Social media has, however unlikely it may once have seemed, shaken the foundations of our society. Yet many of the popular tools used to moderate harmful digital content have drawn widespread criticism from both the academic community and the public for middling performance and a lack of accountability. Though social media research is often assumed to center on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators when different modalities of information are provided, and we discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we examine the consequences of those biases by auditing the performance of the Perspective API toxicity detector on the language of Twitter users across a variety of demographic categories.

Citation (APA)

Jiang, J., & Vosoughi, S. (2020). Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research. In FATE/MM 2020 - Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia (pp. 6–12). Association for Computing Machinery, Inc. https://doi.org/10.1145/3422841.3423534
