On the Language Coverage Bias for Neural Machine Translation


Abstract

Language coverage bias, which refers to the content-dependent differences between sentence pairs originating from the source and target languages, is important for neural machine translation (NMT) because the target-original training data is not well exploited in current practice. By carefully designing experiments, we provide comprehensive analyses of the language coverage bias in the training data, and find that using only the source-original data achieves performance comparable to using the full training data. Based on these observations, we further propose two simple and effective approaches to alleviate the language coverage bias problem by explicitly distinguishing between the source- and target-original training data, which consistently improve performance over strong baselines on six WMT20 translation tasks. Complementary to the translationese effect, language coverage bias provides another explanation for the performance drop caused by back-translation (Marie et al., 2020). We also apply our approach to both back- and forward-translation and find that mitigating the language coverage bias can improve the performance of both representative data augmentation methods and their tagged variants (Caswell et al., 2019).
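One of the approaches described above distinguishes source- and target-original training data explicitly, in the spirit of the tagged variants of Caswell et al. (2019), where a special token is prepended to the source side. The sketch below is a hypothetical illustration of such origin tagging, not the paper's exact implementation; the tag tokens `<SRC_ORIG>` and `<TGT_ORIG>` and the data layout are assumptions.

```python
def tag_by_origin(pairs):
    """Prepend an origin tag to each source sentence so the model can
    distinguish source-original from target-original training data.

    `pairs` is a list of (src, tgt, origin) tuples, where origin is
    either "source" or "target". The tag tokens are illustrative and
    would be added to the model vocabulary in practice.
    """
    tagged = []
    for src, tgt, origin in pairs:
        tag = "<SRC_ORIG>" if origin == "source" else "<TGT_ORIG>"
        tagged.append((f"{tag} {src}", tgt))
    return tagged


# Example: a tiny mixed-origin corpus
corpus = [
    ("ein Beispiel", "an example", "source"),
    ("noch ein Satz", "another sentence", "target"),
]
print(tag_by_origin(corpus))
```

At inference time, the tag corresponding to genuine source-original input would be prepended to test sentences, so the model conditions on the data origin it was trained to expect.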

Citation (APA)

Wang, S., Tu, Z., Tan, Z., Shi, S., Sun, M., & Liu, Y. (2021). On the Language Coverage Bias for Neural Machine Translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4778–4790). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.422
