Assessing Multilingual Fairness in Pre-trained Multimodal Representations

9 citations · 49 Mendeley readers

Abstract

Recently, pre-trained multimodal models such as CLIP (Radford et al., 2021) have shown exceptional capabilities in connecting images and natural language. Their textual representations in English can be transferred to other languages to support downstream multimodal tasks across languages. Nevertheless, the principle of multilingual fairness is rarely scrutinized: do multilingual multimodal models treat languages equally? Are their performances biased towards particular languages? To answer these questions, we view language as the fairness recipient and introduce two new fairness notions, multilingual individual fairness and multilingual group fairness, for pre-trained multimodal models. Multilingual individual fairness requires that text snippets expressing similar semantics in different languages connect similarly to images, while multilingual group fairness requires equalized predictive performance across languages. We characterize the extent to which pre-trained multilingual vision-and-language representations are individually fair across languages. However, extensive experiments demonstrate that multilingual representations do not satisfy group fairness: (1) there is a severe multilingual accuracy disparity issue; (2) the errors exhibit biases across languages conditioned on the group of people in the images, including race, gender, and age.
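The two fairness notions described in the abstract lend themselves to simple empirical checks. Below is a minimal sketch, not the authors' code, of how one might probe them on precomputed, unit-normalized embeddings from a multilingual CLIP-style model; the helper names (`individual_fairness_gap`, `group_accuracy_by_language`) and the assumption that embeddings are already computed are illustrative choices, not part of the paper.

```python
# Sketch only: probing the two multilingual fairness notions on precomputed,
# L2-normalized image/text embeddings from a multilingual CLIP-like encoder.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def individual_fairness_gap(img_emb, caption_embs_by_lang):
    # Multilingual individual fairness: captions with the same meaning in
    # different languages should connect to the image similarly.
    # Returns the largest pairwise gap in image-caption similarity,
    # plus the per-language similarities.
    sims = {lang: cosine(img_emb, t) for lang, t in caption_embs_by_lang.items()}
    return max(sims.values()) - min(sims.values()), sims

def group_accuracy_by_language(img_embs, text_embs_by_lang):
    # Multilingual group fairness: image-to-text retrieval accuracy should be
    # (roughly) equal across languages. img_embs: (N, d); each language maps
    # to an aligned (N, d) matrix of caption embeddings (row i describes image i).
    acc = {}
    for lang, txt in text_embs_by_lang.items():
        sims = img_embs @ txt.T            # (N, N) image-caption similarities
        preds = sims.argmax(axis=1)        # retrieved caption index per image
        acc[lang] = float((preds == np.arange(len(img_embs))).mean())
    return acc
```

On aligned image-caption data, a large similarity gap between translations of the same caption signals an individual-fairness violation, while a wide spread in per-language retrieval accuracy corresponds to the accuracy disparity the abstract reports.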



Citation (APA)

Wang, J., Liu, Y., & Wang, X. E. (2022). Assessing Multilingual Fairness in Pre-trained Multimodal Representations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2681–2695). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.211


Readers' Seniority

PhD / Post grad / Masters / Doc: 12 (60%)
Researcher: 5 (25%)
Lecturer / Post doc: 2 (10%)
Professor / Associate Prof.: 1 (5%)

Readers' Discipline

Computer Science: 17 (77%)
Medicine and Dentistry: 2 (9%)
Linguistics: 2 (9%)
Neuroscience: 1 (5%)
