Towards Equal Gender Representation in the Annotations of Toxic Language Detection

Abstract

Classifiers tend to propagate biases present in the data on which they are trained. Hence, it is important to understand how the demographic identities of the annotators of comments affect the fairness of the resulting model. In this paper, we focus on the differences in the ways men and women annotate comments for toxicity, investigating how these differences result in models that amplify the opinions of male annotators. We find that the BERT model associates toxic comments containing offensive words with male annotators, causing the model to predict 67.7% of toxic comments as having been annotated by men. We show that this disparity between gender predictions can be mitigated by removing offensive words and highly toxic comments from the training data. We then apply the learned associations between gender and language to toxic language classifiers, finding that models trained exclusively on female-annotated data perform 1.8% better than those trained solely on male-annotated data, and that training models on data after removing all offensive words reduces bias in the model by 55.5% while increasing the sensitivity by 0.4%.
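To make the debiasing step described above concrete, here is a minimal sketch of offensive-word removal applied to training data before fine-tuning a toxicity classifier. This is not the authors' code: the OFFENSIVE_WORDS lexicon, the function names, and the (comment, toxicity_label, annotator_gender) record layout are placeholders assumed for illustration only.

```python
# Minimal sketch (not the authors' implementation) of stripping
# lexicon-matched offensive words from comments prior to training.
import re

# Hypothetical placeholder lexicon; the paper relies on an external
# offensive-word list that is not reproduced here.
OFFENSIVE_WORDS = {"offensiveword1", "offensiveword2"}

def strip_offensive_words(comment: str) -> str:
    """Remove lexicon matches from a comment, keeping all other text."""
    tokens = re.findall(r"\w+|\W+", comment)  # word / non-word runs
    return "".join(t for t in tokens if t.lower() not in OFFENSIVE_WORDS)

def preprocess(dataset):
    """dataset: iterable of (comment, toxicity_label, annotator_gender) tuples."""
    return [(strip_offensive_words(c), y, g) for c, y, g in dataset]
```

The cleaned records would then be fed to whatever classifier is being trained (BERT in the paper), so that the model cannot latch onto the removed terms as shortcuts.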

Citation (APA)

Excell, E., & Al Moubayed, N. (2021). Towards equal gender representation in the annotations of toxic language detection. In Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing (GeBNLP 2021) (pp. 55–65). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.gebnlp-1.7
