A Robust Bias Mitigation Procedure Based on the Stereotype Content Model

Abstract

The Stereotype Content Model (SCM) states that we tend to perceive minority groups as cold, incompetent, or both. In this paper, we adapt existing work to demonstrate that the Stereotype Content Model holds for contextualised word embeddings, then use these results to evaluate a fine-tuning process designed to drive a language model away from stereotyped portrayals of minority groups. We find that SCM terms capture bias better than demographic-agnostic terms related to pleasantness. Further, we were able to reduce the presence of stereotypes in the model through a simple fine-tuning procedure that required minimal human and computational resources, without harming downstream performance. We present this work as a prototype of a debiasing procedure that aims to remove the need for a priori knowledge of the specifics of bias in the model.
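The central measurement idea is an association test: project contextualised embeddings of group terms onto warmth and competence axes built from SCM term lists, and read off how warm and competent the model treats each group. The sketch below illustrates one such test; the term lists, carrier sentence, mean pooling, and bert-base-uncased model are illustrative assumptions, not the paper's exact protocol.

import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative SCM term lists; the paper uses a curated SCM lexicon.
WARM = ["friendly", "kind", "trustworthy", "sincere"]
COLD = ["hostile", "cold", "dishonest", "insincere"]
COMPETENT = ["capable", "skilled", "intelligent", "efficient"]
INCOMPETENT = ["incapable", "unskilled", "foolish", "inefficient"]

# Model choice is an assumption for demonstration.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(phrase: str) -> torch.Tensor:
    """Mean-pooled last-layer embedding of a phrase in a neutral carrier sentence."""
    inputs = tokenizer(f"They are {phrase}.", return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden)
    return hidden.mean(dim=0)

def scm_axis(pos: list[str], neg: list[str]) -> torch.Tensor:
    """One SCM dimension as the difference of mean term embeddings."""
    pos_mean = torch.stack([embed(w) for w in pos]).mean(dim=0)
    neg_mean = torch.stack([embed(w) for w in neg]).mean(dim=0)
    return pos_mean - neg_mean

warmth = scm_axis(WARM, COLD)
competence = scm_axis(COMPETENT, INCOMPETENT)

# Hypothetical group terms, purely for demonstration.
for group in ["immigrants", "lawyers", "the elderly"]:
    g = embed(group)
    w = torch.cosine_similarity(g, warmth, dim=0).item()
    c = torch.cosine_similarity(g, competence, dim=0).item()
    print(f"{group:12s} warmth={w:+.3f} competence={c:+.3f}")

A group scoring low on one or both axes would be read as stereotyped; per the abstract, the mitigation step then fine-tunes the model away from such portrayals, which is standard language-model fine-tuning and is not repeated here.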

Cite (APA)

Ungless, E. L., Rafferty, A., Nag, H., & Ross, B. (2022). A Robust Bias Mitigation Procedure Based on the Stereotype Content Model. In Proceedings of the 5th Workshop on Natural Language Processing and Computational Social Science (NLP+CSS), held at the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022) (pp. 207–217). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.nlpcss-1.23
