An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models

Abstract

Warning: This work deals with statements of a stereotypical nature that may be upsetting. Bias research in NLP is a rapidly growing and developing field. Similar to CrowS-Pairs (Nangia et al., 2020), we assess gender bias in masked language models (MLMs) by studying pairs of sentences that are identical except that the individuals referred to have different genders. Most bias research focuses on, and is often specific to, English. Using a novel methodology for creating sentence pairs that is applicable across languages, we create, based on CrowS-Pairs, a multilingual dataset for English, Finnish, German, Indonesian, and Thai. Additionally, we propose SJSD, a new bias measure based on Jensen-Shannon divergence, which we argue retains more information from the model output probabilities than previously proposed bias measures for MLMs. Using multilingual MLMs, we find that SJSD diagnoses the same systematic biased behavior for non-English languages that previous studies have found for monolingual English pre-trained MLMs. SJSD outperforms the CrowS-Pairs measure, which struggles to find such biases on the smaller non-English datasets.
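
The abstract names the ingredients of SJSD but does not reproduce its exact formulation. The sketch below only illustrates the general idea of a Jensen-Shannon-divergence-based comparison of an MLM's vocabulary distributions at masked positions of a minimally different gendered sentence pair. The model checkpoint, the restriction to shared (unmodified) token positions, the equal-length tokenization assumption, and the mean aggregation are all assumptions for illustration, not the paper's definition.

import numpy as np
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def mask_distribution(model, tokenizer, sentence, position):
    """Vocabulary distribution the MLM predicts at one masked position."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    input_ids[0, position] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=input_ids,
                       attention_mask=enc["attention_mask"]).logits
    return torch.softmax(logits[0, position], dim=-1).numpy()

# Model choice is an assumption; any multilingual MLM checkpoint would do.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

# A minimally different gendered pair in the CrowS-Pairs style.
sent_a = "He is a good nurse."
sent_b = "She is a good nurse."

# Assumes both sentences tokenize to the same length so that shared
# (unmodified) positions line up one-to-one; special tokens are skipped.
ids_a = tokenizer(sent_a, return_tensors="pt")["input_ids"][0].tolist()
ids_b = tokenizer(sent_b, return_tensors="pt")["input_ids"][0].tolist()
shared = [i for i, (a, b) in enumerate(zip(ids_a, ids_b))
          if a == b and a not in tokenizer.all_special_ids]

# Mask each shared position in turn, compare the two predicted vocabulary
# distributions, and average: one simple per-pair divergence score.
scores = [jensen_shannon(mask_distribution(model, tokenizer, sent_a, i),
                         mask_distribution(model, tokenizer, sent_b, i))
          for i in shared]
print(f"mean JSD over shared positions: {np.mean(scores):.4f}")

A higher score indicates that swapping the gender of the referent changes the model's predictive distributions more; unlike a measure that keeps only the probability of one pre-selected token, the divergence uses the full output distribution at each position.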

Citation (APA)
Steinborn, V., Dufter, P., Jabbar, H., & Schütze, H. (2022). An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 921–932). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.69
