Probing Pre-Trained Language Models for Cross-Cultural Differences in Values

4 citations · 65 Mendeley readers
Abstract

Language embeds information about the social, cultural, and political values people hold. Prior work has explored potentially harmful social biases encoded in Pre-trained Language Models (PLMs). However, there has been no systematic study of how the values embedded in these models vary across cultures. In this paper, we introduce probes to study which cross-cultural values are embedded in these models and whether they align with existing theories and cross-cultural values surveys. We find that PLMs capture differences in values across cultures, but that these only weakly align with established values surveys. We discuss the implications of using misaligned models in cross-cultural settings, as well as ways of aligning PLMs with values surveys.

Citation (APA)

Arora, A., Kaffee, L. A., & Augenstein, I. (2023). Probing Pre-Trained Language Models for Cross-Cultural Differences in Values. In Cross-Cultural Considerations in NLP at EACL, C3NLP 2023 - Proceedings of the Workshop (pp. 114–130). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.c3nlp-1.12