DEEP: DEnoising Entity Pre-training for Neural Machine Translation


Abstract

It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus. Earlier named-entity translation methods mainly focused on phonetic transliteration, which ignores the sentence context and is limited in domain and language coverage. To address this limitation, we propose DEEP, a DEnoising Entity Pre-training method that leverages large amounts of monolingual data and a knowledge base to improve named entity translation accuracy within sentences. In addition, we investigate a multi-task learning strategy that fine-tunes a pre-trained neural machine translation model on both entity-augmented monolingual data and parallel data to further improve entity translation. Experimental results on three language pairs demonstrate that DEEP achieves significant improvements over strong denoising auto-encoding baselines, with gains of up to 1.3 BLEU and up to 9.2 entity-accuracy points on English-Russian translation.
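
As a rough illustration of the pre-training objective described above, the sketch below corrupts entity spans in monolingual text so that a sequence-to-sequence model can be trained to reconstruct them. This is a minimal, hypothetical example, not the authors' implementation: the toy entity set standing in for the knowledge-base lookup, the <MASK> token convention, and the function names are all assumptions.

```python
# Minimal sketch of a denoising-entity objective, assuming entities are
# found via a toy knowledge-base lookup and corrupted spans are replaced
# by a single <MASK> token. DEEP's actual pipeline links entities against
# a real knowledge base and pre-trains a full seq2seq model on the
# resulting (noised, original) pairs.

import re

# Stand-in for a knowledge-base entity lookup (hypothetical).
KB_ENTITIES = {"New York", "Angela Merkel", "Amazon"}

def noise_entities(sentence: str, mask: str = "<MASK>") -> tuple[str, str]:
    """Replace known entity mentions with a mask token.

    Returns (noised_input, reconstruction_target); a denoising
    seq2seq model is then trained to map the first back to the second.
    """
    noised = sentence
    # Replace longer entities first so overlapping mentions are not
    # partially masked.
    for entity in sorted(KB_ENTITIES, key=len, reverse=True):
        noised = re.sub(re.escape(entity), mask, noised)
    return noised, sentence

if __name__ == "__main__":
    src, tgt = noise_entities("Angela Merkel visited New York last week.")
    print(src)  # "<MASK> visited <MASK> last week."
    print(tgt)  # original sentence, used as the reconstruction target
```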

Citation (APA)

Hu, J., Hayashi, H., Cho, K., & Neubig, G. (2022). DEEP: DEnoising Entity Pre-training for Neural Machine Translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 1753–1766). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.123
