Improving Language Model Integration for Neural Machine Translation

Abstract

The integration of language models for neural machine translation has been extensively studied in the past. It has been shown that an external language model, trained on additional target-side monolingual data, can improve translation quality. However, it has commonly been assumed that the translation model also learns an implicit target-side language model during training, which interferes with the external language model at decoding time. Recently, several works on automatic speech recognition have demonstrated that, if this implicit language model is neutralized during decoding, further improvements can be gained from integrating an external language model. In this work, we transfer this concept to the task of machine translation and compare it with the most prominent way of including additional monolingual data, namely back-translation. We find that accounting for the implicit language model significantly boosts the performance of language model fusion, although this approach is still outperformed by back-translation.
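
As a rough illustration of the idea described above (a sketch of the common shallow-fusion formulation with internal language model subtraction, as used in speech recognition work, not a formula quoted from the paper; the weight names lambda_ext and lambda_ilm are placeholders), the decoding score combines the translation model with an external language model and subtracts an estimate of the implicit language model:

    y* = argmax_y [ log p_TM(y | x) + lambda_ext * log p_LM(y) - lambda_ilm * log p_ILM(y) ]

Here p_TM is the translation model, p_LM is the external language model trained on monolingual target data, and p_ILM is the estimated implicit (internal) language model of the translation model; the exact decoding rule and weights used by the authors may differ.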

Citation (APA)
Herold, C., Gao, Y., Zeineldeen, M., & Ney, H. (2023). Improving Language Model Integration for Neural Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 7114–7123). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-acl.444
