Analyzing Gender Translation Errors to Identify Information Flows between the Encoder and Decoder of an NMT System


Abstract

Multiple studies have shown that existing NMT systems demonstrate some kind of gender bias. As a result, MT output appears to err more often for feminine forms and to amplify social gender misrepresentations, which is potentially harmful to users and practitioners of these technologies. This paper continues this line of investigation and reports results obtained with a new test set in strictly controlled conditions. This setting allows us to better understand the multiple inner mechanisms that are causing these biases, which include the linguistic expressions of gender, the unbalanced distribution of masculine and feminine forms in the language, the modelling of morphological variation and the training process dynamics. To counterbalance these effects, we formulate several proposals and notably show that modifying the training loss can effectively mitigate such biases.
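The abstract does not spell out how the training loss is modified; one plausible reading is to re-weight the loss on under-represented gender-marked target forms so that errors on feminine forms cost more during training. The sketch below illustrates this idea only; the token set, weights, and probabilities are invented for the example and are not taken from the paper.

```python
import math

# Hypothetical set of feminine gender-marked target forms (illustrative only).
FEMININE_TOKENS = {"elle", "actrice"}

def weighted_nll(target_tokens, probs, fem_weight=2.0):
    """Per-token negative log-likelihood, up-weighting feminine forms.

    target_tokens: reference tokens of the target sentence
    probs: model probability assigned to each reference token
    fem_weight: extra multiplier applied to feminine-marked tokens
    """
    loss = 0.0
    for tok, p in zip(target_tokens, probs):
        w = fem_weight if tok in FEMININE_TOKENS else 1.0
        loss += -w * math.log(p)
    return loss / len(target_tokens)

# With identical token probabilities, the same prediction error is
# costlier on the feminine sentence than on the masculine one:
masc_loss = weighted_nll(["il", "acteur"], [0.6, 0.5])
fem_loss = weighted_nll(["elle", "actrice"], [0.6, 0.5])
```

Under this toy weighting, `fem_loss` is exactly `fem_weight` times `masc_loss`, which pushes the optimizer to reduce errors on feminine forms faster; the actual loss modification used by the authors may differ.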

Citation (APA)

Wisniewski, G., Zhu, L., Ballier, N., & Yvon, F. (2022). Analyzing Gender Translation Errors to Identify Information Flows between the Encoder and Decoder of an NMT System. In BlackboxNLP 2022 - BlackboxNLP Analyzing and Interpreting Neural Networks for NLP, Proceedings of the Workshop (pp. 153–163). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.blackboxnlp-1.13
