Reusing a pretrained language model on languages with limited corpora for unsupervised NMT

Abstract

Using a language model (LM) pretrained on two languages with large monolingual data in order to initialize an unsupervised neural machine translation (UNMT) system yields state-of-the-art results. When limited data is available for one language, however, this method leads to poor translations. We present an effective approach that reuses an LM that is pretrained only on a high-resource language. The monolingual LM is fine-tuned on both languages and is then used to initialize a UNMT model. To reuse the pretrained LM, we have to modify its predefined vocabulary to account for the new language. We therefore propose a novel vocabulary extension method. Our approach, RE-LM, outperforms a competitive cross-lingual pretraining model (XLM) in English-Macedonian (En-Mk) and English-Albanian (En-Sq), yielding more than +8.3 BLEU points for all four translation directions.
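To make the vocabulary-extension step concrete, below is a minimal sketch (not the authors' code) of how a pretrained LM's embedding table could be enlarged so that subwords of the new, low-resource language can be added before fine-tuning on both languages. All names, sizes, and the initialization heuristic are illustrative assumptions, not necessarily RE-LM's exact choices.

```python
# Minimal sketch: extend the embedding matrix of a pretrained monolingual LM
# with rows for new-language tokens, then fine-tune on both languages.
# Names and initialization are assumptions for illustration only.

import torch
import torch.nn as nn


def extend_embeddings(old_embedding: nn.Embedding, num_new_tokens: int) -> nn.Embedding:
    """Return a larger embedding table that keeps the pretrained rows and
    appends freshly initialized rows for the new-language tokens."""
    old_vocab_size, dim = old_embedding.weight.shape
    new_embedding = nn.Embedding(old_vocab_size + num_new_tokens, dim)

    with torch.no_grad():
        # Keep the pretrained (high-resource language) embeddings unchanged.
        new_embedding.weight[:old_vocab_size] = old_embedding.weight
        # One common heuristic (an assumption here): initialize the new rows
        # near the mean of the pretrained embeddings, plus small noise.
        mean = old_embedding.weight.mean(dim=0)
        new_embedding.weight[old_vocab_size:] = mean + 0.02 * torch.randn(num_new_tokens, dim)

    return new_embedding


# Hypothetical usage: `lm.embed_tokens` stands for the input embedding module
# of some pretrained LM, and 8,000 new subwords are added for the new language.
#   lm.embed_tokens = extend_embeddings(lm.embed_tokens, num_new_tokens=8000)
# The extended LM would then be fine-tuned with its usual LM objective on
# monolingual data of both languages before initializing the UNMT model.
```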

Citation (APA)

Chronopoulou, A., Stojanovski, D., & Fraser, A. (2020). Reusing a pretrained language model on languages with limited corpora for unsupervised NMT. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 2703–2711). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.214
