Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation

Citations: N/A
Readers: 80 (Mendeley users with this article in their library)

Abstract

Data scarcity in low-resource languages has become a bottleneck for building robust neural machine translation systems. Fine-tuning a multilingual pre-trained model (e.g., mBART (Liu et al., 2020a)) on the translation task is a good approach for low-resource languages; however, its performance is greatly limited when the translation pairs contain unseen languages. In this paper, we present a continual pre-training (CPT) framework on mBART to effectively adapt it to unseen languages. We first construct noisy mixed-language text from the monolingual corpus of the target language in the translation pair so that it covers both the source and target languages, and then we continue pre-training mBART to reconstruct the original monolingual text. Results show that our method consistently improves the fine-tuning performance over the mBART baseline, as well as other strong baselines, across all tested low-resource translation pairs containing unseen languages. Furthermore, our approach also boosts performance on translation pairs where both languages are seen in the original mBART pre-training. The code is available at https://github.com/zliucr/cpt-nmt.
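
As a rough illustration of the idea rather than the authors' exact procedure, the sketch below builds a noisy mixed-language input from a target-language sentence by swapping words through a bilingual lexicon and adding mBART-style masking; the function name, probabilities, and toy lexicon are all hypothetical assumptions, and the paper's actual noising pipeline may differ.

import random

def make_mixed_language_input(tgt_tokens, lexicon, swap_prob=0.3, mask_prob=0.15,
                              mask_token="<mask>", seed=None):
    """Turn a target-language sentence into a noisy mixed-language input.

    Tokens with an entry in the bilingual lexicon are code-switched into the
    other language with probability `swap_prob`; some remaining tokens are
    masked, mBART-style, with probability `mask_prob`. The continual
    pre-training pair is (noisy mixed-language input -> original sentence).
    """
    rng = random.Random(seed)
    noisy = []
    for tok in tgt_tokens:
        if tok in lexicon and rng.random() < swap_prob:
            noisy.append(lexicon[tok])      # switch into the other language
        elif rng.random() < mask_prob:
            noisy.append(mask_token)        # denoising noise on the input side
        else:
            noisy.append(tok)               # keep the original token
    return noisy

# Toy usage with a hypothetical English-Spanish word list.
toy_lexicon = {"small": "pequena", "house": "casa"}
original = ["the", "small", "house", "is", "white"]
noisy_input = make_mixed_language_input(original, toy_lexicon, seed=0)
# The model is then trained to reconstruct `original` from `noisy_input`.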

Cite

APA

Liu, Z., Winata, G. I., & Fung, P. (2021). Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2706–2718). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.239
