Integrating Translation Memories into Non-Autoregressive Machine Translation

Abstract

Non-autoregressive machine translation (NAT) has recently made great progress. However, most works to date have focused on standard translation tasks, even though some edit-based NAT models, such as the Levenshtein Transformer (LevT), seem well suited to translating with a Translation Memory (TM). This is the scenario considered here. We first analyze the vanilla LevT model and explain why it does not do well in this setting. We then propose a new variant, TM-LevT, and show how to effectively train this model. By modifying the data presentation and introducing an extra deletion operation, we obtain performance that is on par with an autoregressive approach, while reducing the decoding load. We also show that incorporating TMs during training removes the need for knowledge distillation, a well-known trick used to mitigate the multimodality issue.
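The paper's code is not reproduced here; as a rough, hypothetical sketch of the edit-based decoding loop the abstract alludes to — starting from a retrieved TM target rather than an empty sequence, deleting tokens that do not fit the source, then inserting placeholders and filling them in parallel — one might write something like the following. The `model` interface (predict_deletions, predict_insertions, fill_placeholders) and all other names are illustrative assumptions, not the authors' actual API.

```python
PLH = "<plh>"  # placeholder token opened by the insertion step


def edit_decode(source, tm_target, model, max_iters=10):
    """Iteratively refine a retrieved TM match into a translation of `source`.

    Hypothetical sketch of LevT-style decoding initialized with a TM target;
    the three `model.*` calls stand in for the deletion, placeholder-insertion,
    and token-prediction heads described in the LevT literature.
    """
    hyp = list(tm_target)  # start from the TM match instead of an empty hypothesis
    for _ in range(max_iters):
        # 1) Deletion: drop hypothesis tokens judged irrelevant to the source.
        keep = model.predict_deletions(source, hyp)      # one bool per token
        hyp = [tok for tok, k in zip(hyp, keep) if k]

        # 2) Placeholder insertion: predict how many slots to open in each gap.
        counts = model.predict_insertions(source, hyp)   # len(hyp) + 1 integers
        new_hyp = []
        for i, n in enumerate(counts):
            new_hyp.extend([PLH] * n)
            if i < len(hyp):
                new_hyp.append(hyp[i])
        hyp = new_hyp

        # 3) Token prediction: fill every placeholder in parallel.
        hyp = model.fill_placeholders(source, hyp)

        # Stop once an iteration neither deletes nor inserts anything.
        if all(keep) and sum(counts) == 0:
            break
    return hyp
```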

Cite

APA

Xu, J., Crego, J., & Yvon, F. (2023). Integrating Translation Memories into Non-Autoregressive Machine Translation. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1318–1330). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.96
