Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks

Abstract

Warning: The content of this paper may be upsetting or triggering. The rapid deployment of artificial intelligence (AI) models demands a thorough investigation of the biases and risks inherent in these models to understand their impact on individuals and society. A growing body of work has shown that social biases are encoded in language models and their downstream tasks. This study extends the focus of bias evaluation in extant work by examining bias against social stigmas on a large scale. It covers 93 stigmatized groups in the United States, including a wide range of conditions related to disease, disability, drug use, mental illness, religion, sexuality, socioeconomic status, and other relevant factors. We investigate bias against these groups in English pre-trained Masked Language Models (MLMs) and their downstream sentiment classification tasks. To evaluate the presence of bias against the 93 stigmatized conditions, we identify 29 non-stigmatized conditions for a comparative analysis. Building upon a psychological scale of social rejection, the Social Distance Scale, we prompt six MLMs trained on different datasets: RoBERTa-base, RoBERTa-large, XLNet-large, BERTweet-base, BERTweet-large, and DistilBERT. We use human annotations to analyze the words predicted by these models, with which we measure the extent of bias against stigmatized groups. When prompts include stigmatized conditions, the probability of MLMs predicting negative words is, on average, 20 percent higher than when prompts include non-stigmatized conditions. Bias against stigmatized groups is also reflected in four downstream sentiment classifiers built on these models. Sentences that mention stigmatized conditions related to disease, disability, education, and mental illness are more likely to be classified as negative. For example, the sentence "They are people who have less than a high school education." is consistently classified as negative across all models. We also observe a strong correlation between bias in MLMs and bias in their downstream sentiment classifiers (Pearson's r = 0.79). The evidence indicates that MLMs and their downstream sentiment classification tasks exhibit biases against socially stigmatized groups.
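As a rough illustration of the probing setup the abstract describes, the sketch below uses Hugging Face's fill-mask and sentiment-analysis pipelines. The prompt template, the two example conditions, and the choice of roberta-base plus the default SST-2 sentiment model are illustrative assumptions, not the authors' exact prompts, annotation procedure, or classifiers.

from transformers import pipeline

# Fill-mask probing: ask an MLM (roberta-base, one of the six models named
# above) to complete a social-distance-style prompt about a condition.
fill_mask = pipeline("fill-mask", model="roberta-base")

# Hypothetical prompt template and conditions, loosely modeled on the
# Social Distance Scale; not the paper's exact wording.
template = "I would feel <mask> about living next door to people who {}."
conditions = {
    "stigmatized": "have less than a high school education",
    "non-stigmatized": "have a college education",
}

for label, condition in conditions.items():
    # Top-5 predicted fillers for the masked slot, with model probabilities.
    predictions = fill_mask(template.format(condition), top_k=5)
    print(label, [(p["token_str"].strip(), round(p["score"], 3)) for p in predictions])

# Downstream sentiment classification: the default sentiment-analysis
# pipeline (DistilBERT fine-tuned on SST-2) stands in here for the four
# classifiers studied in the paper.
sentiment = pipeline("sentiment-analysis")
print(sentiment("They are people who have less than a high school education."))

In the study itself, the predicted fillers are labeled negative or non-negative by human annotators; the toy comparison above only hints at that analysis.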

Citation (APA)

Mei, K., Fereidooni, S., & Caliskan, A. (2023). Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks. In ACM International Conference Proceeding Series (pp. 1699–1710). Association for Computing Machinery. https://doi.org/10.1145/3593013.3594109
