Monolingual and Multilingual Reduction of Gender Bias in Contextualized Representations

Abstract

Pretrained language models (PLMs) learn stereotypes held by humans and reflected in the text of their training corpora, including gender bias. When PLMs are used for downstream tasks such as picking candidates for a job, people's lives can be negatively affected by these learned stereotypes. Prior work usually identifies a linear gender subspace and removes gender information by eliminating that subspace. Following this line of work, we propose to use DensRay, an analytical method for obtaining interpretable dense subspaces. We show that DensRay performs on par with prior approaches, argue that it is more robust, and find indications that it better preserves language model performance. By applying DensRay to attention heads and layers of BERT, we show that gender information is spread across all attention heads and most of the layers. We also show that DensRay can obtain gender bias scores at both the token and sentence level. Finally, we demonstrate that we can remove bias multilingually, e.g., from Chinese, using only English training data.
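The abstract describes the general recipe of linear-subspace debiasing: estimate a gender direction from labelled word pairs, then project representations onto its orthogonal complement. The sketch below illustrates that idea with a simplified DensRay-style eigen-decomposition; it is not the authors' exact pipeline, and the function names, word-pair lists, and use of static toy vectors in place of BERT's contextualized representations are illustrative assumptions.

```python
import numpy as np

def densray_direction(emb, pairs_diff, pairs_same):
    """Estimate a unit direction along which the two gender classes separate.

    emb:        dict mapping word -> embedding vector (np.ndarray)
    pairs_diff: word pairs with different gender labels, e.g. ("he", "she")
    pairs_same: word pairs with the same gender label, e.g. ("he", "him")
    """
    dim = emb[next(iter(emb))].shape[0]
    A = np.zeros((dim, dim))
    for w, v in pairs_diff:            # push differently-labelled words apart
        d = emb[w] - emb[v]
        A += np.outer(d, d)
    for w, v in pairs_same:            # keep same-labelled words close together
        d = emb[w] - emb[v]
        A -= np.outer(d, d)
    # Maximizing q^T A q under ||q|| = 1 yields the top eigenvector of A.
    _, eigvecs = np.linalg.eigh(A)     # eigenvalues in ascending order
    return eigvecs[:, -1]

def remove_gender_subspace(x, q):
    """Project a representation x onto the complement of the gender direction q."""
    q = q / np.linalg.norm(q)
    return x - np.dot(x, q) * q

# Toy usage with random 768-dimensional vectors standing in for BERT outputs.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=768) for w in ["he", "she", "him", "her", "man", "woman"]}
q = densray_direction(
    emb,
    pairs_diff=[("he", "she"), ("him", "her"), ("man", "woman")],
    pairs_same=[("he", "him"), ("she", "her")],
)
debiased = {w: remove_gender_subspace(v, q) for w, v in emb.items()}
```

In this simplified form, removing a single direction corresponds to eliminating a one-dimensional gender subspace; the same projection step can in principle be applied to individual attention heads or layers, which is the level at which the paper analyzes where gender information resides in BERT.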

Citation (APA)

Liang, S., Dufter, P., & Schütze, H. (2020). Monolingual and Multilingual Reduction of Gender Bias in Contextualized Representations. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5082–5093). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.446
