Cross-lingual Transfer Can Worsen Bias in Sentiment Analysis


Abstract

Sentiment analysis (SA) systems are widely deployed in many of the world's languages, and there is well-documented evidence of demographic bias in these systems. In languages beyond English, scarcer training data is often supplemented with transfer learning using pretrained models, including multilingual models trained on other languages. In some cases, even supervision data comes from other languages. Does cross-lingual transfer also import new biases? To answer this question, we use counterfactual evaluation to test whether gender or racial biases are imported when using cross-lingual transfer, compared to a monolingual transfer setting. Across five languages, we find that systems using cross-lingual transfer usually become more biased than their monolingual counterparts. We also find racial biases to be much more prevalent than gender biases. To spur further research on this topic, we release the sentiment models we used for this study, and the intermediate checkpoints throughout training, yielding 1,525 distinct models; we also release our evaluation code.
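The counterfactual evaluation mentioned above can be sketched as scoring the same sentence template with two demographic terms swapped in and measuring the score gap. This is an illustrative sketch only: the scorer, function names, and example terms here are stand-ins, not the paper's models or test sets.

```python
def toy_sentiment(text: str) -> float:
    """Stand-in lexicon scorer; a real system would be a trained classifier."""
    positive = {"great", "happy", "excellent"}
    negative = {"bad", "terrible", "angry"}
    words = text.lower().split()
    score = 0.5 + 0.1 * sum(w in positive for w in words) \
                - 0.1 * sum(w in negative for w in words)
    return max(0.0, min(1.0, score))

def counterfactual_gap(template: str, term_a: str, term_b: str,
                       score=toy_sentiment) -> float:
    """Fill the same template with two demographic terms and return the
    absolute difference in sentiment scores -- one simple bias measure."""
    return abs(score(template.format(term_a)) - score(template.format(term_b)))

# An unbiased scorer yields a gap of 0 for demographically paired inputs.
gap = counterfactual_gap("{} had a great day.", "Alex", "Aisha")
```

In a study like this one, the gap would be aggregated over many templates and term pairs, and compared between monolingual and cross-lingual variants of the same model.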

Citation (APA)

Goldfarb-Tarrant, S., Ross, B., & Lopez, A. (2023). Cross-lingual Transfer Can Worsen Bias in Sentiment Analysis. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 5691–5704). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.346
