You Reap What You Sow: On the Challenges of Bias Evaluation Under Multilingual Settings

47 citations · 58 Mendeley readers

Abstract

Evaluating bias, fairness, and social impact in monolingual language models is a difficult task. This challenge is further compounded when language modeling occurs in a multilingual context. Considering the implications of evaluation biases for large multilingual language models, we situate the discussion of bias evaluation within the wider context of social scientific research alongside computational work. We highlight three dimensions of developing multilingual bias evaluation frameworks: (1) increasing transparency through documentation, (2) expanding targets of bias beyond gender, and (3) addressing cultural differences between languages. We further discuss the power dynamics and consequences of training large language models and recommend that researchers remain cognizant of the ramifications of developing such technologies.

Citation (APA)

Talat, Z., Névéol, A., Biderman, S., Clinciu, M., Dey, M., Longpre, S., … van der Wal, O. (2022). You Reap What You Sow: On the Challenges of Bias Evaluation Under Multilingual Settings. In 2022 Challenges and Perspectives in Creating Large Language Models, Proceedings of the Workshop (pp. 26–41). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.bigscience-1.3
