Speaking Multiple Languages Affects the Moral Bias of Language Models

Abstract

Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice, this means their performance is often much better on English than on many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both of these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MORALDIRECTION framework to multilingual models, comparing results in German, Czech, Arabic, Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that PMLMs do encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions. We release our code and models.
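
The MORALDIRECTION framework referenced above scores phrases along a "moral direction" extracted from sentence-embedding space. The sketch below illustrates the general idea on a multilingual encoder; the model name, seed phrases, question template, and query sentences are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a MoralDirection-style probe on a multilingual sentence
# encoder. Seed phrases, template, and model choice are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# A multilingual SBERT-style encoder (assumed choice, not the paper's exact model).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Seed actions with a broadly agreed moral polarity (illustrative).
positive = ["help people", "tell the truth", "protect children"]
negative = ["steal money", "hurt animals", "lie to friends"]
template = "Should I {}?"

def embed(actions):
    # Embed templated questions for a list of action phrases.
    return model.encode([template.format(a) for a in actions])

# The first principal component over the seed embeddings serves as the
# candidate "moral direction" in embedding space.
X = np.vstack([embed(positive), embed(negative)])
pca = PCA(n_components=1).fit(X)

# Orient the component so that the positive seed actions score higher.
sign = 1.0 if pca.transform(embed(positive)).mean() > pca.transform(embed(negative)).mean() else -1.0

def moral_score(sentences):
    # Signed projection onto the moral direction (higher = judged more acceptable).
    return sign * pca.transform(model.encode(sentences))[:, 0]

# Compare the same question phrased in different languages (hypothetical inputs).
queries = ["Should I lie to my friends?", "Soll ich meine Freunde anlügen?"]
for q, s in zip(queries, moral_score(queries)):
    print(f"{s:+.3f}  {q}")
```

Comparing such scores for translations of the same action is one way to surface the kind of cross-lingual inconsistency in moral bias that the paper investigates.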

Citation (APA)

Hämmerl, K., Deiseroth, B., Schramowski, P., Libovický, J., Rothkopf, C. A., Fraser, A., & Kersting, K. (2023). Speaking Multiple Languages Affects the Moral Bias of Language Models. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 2137–2156). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.134
