Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences

Abstract

Previous studies show that intermediate supervision signals benefit various Natural Language Processing tasks. However, it is not clear whether there exist intermediate signals that benefit Neural Machine Translation (NMT). Borrowing techniques from Statistical Machine Translation, we propose intermediate signals in the form of intermediate sequences that progress from a "source-like" structure to a "target-like" structure. Such intermediate sequences introduce an inductive bias that reflects a domain-agnostic principle of translation, reducing spurious correlations that are harmful to out-of-domain generalisation. Furthermore, we introduce full-permutation multi-task learning to alleviate the spurious causal relations from the intermediate sequences to the target, which result from exposure bias. The Minimum Bayes Risk decoding algorithm is used to pick the best candidate translation among all permutations, further improving performance. Experiments show that the introduced intermediate signals effectively improve the domain robustness of NMT and reduce the amount of hallucination in out-of-domain translation. Further analysis shows that our methods are especially promising in low-resource scenarios.
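
The abstract's final decoding step selects one translation from the outputs of all task permutations via Minimum Bayes Risk (MBR) decoding. The sketch below illustrates the general MBR procedure: score each candidate by its expected utility against the other candidates and keep the highest-scoring one. It is a minimal illustration only; the token-level F1 utility and the example candidates are stand-in assumptions, not the paper's actual utility metric or data.

    # Minimal sketch of MBR decoding over candidate translations.
    # Assumption: token-level F1 as a surrogate utility; the paper's
    # actual utility metric is not specified in the abstract.
    from collections import Counter

    def token_f1(hyp: str, ref: str) -> float:
        """Surrogate utility: F1 over the token multisets of two strings."""
        h, r = Counter(hyp.split()), Counter(ref.split())
        overlap = sum((h & r).values())  # size of the multiset intersection
        if overlap == 0:
            return 0.0
        precision = overlap / sum(h.values())
        recall = overlap / sum(r.values())
        return 2 * precision * recall / (precision + recall)

    def mbr_decode(candidates: list[str]) -> str:
        """Return the candidate with the highest expected utility,
        estimated against all other candidates (uniform weights)."""
        def expected_utility(i: int) -> float:
            others = [j for j in range(len(candidates)) if j != i]
            return sum(token_f1(candidates[i], candidates[j]) for j in others) / max(len(others), 1)
        best = max(range(len(candidates)), key=expected_utility)
        return candidates[best]

    # Hypothetical candidates, e.g. produced by different task permutations.
    candidates = [
        "the cat sat on the mat",
        "the cat sits on the mat",
        "a feline rested on a rug",
    ]
    print(mbr_decode(candidates))  # -> "the cat sat on the mat"

Because each candidate is scored against all the others, MBR favours consensus-like outputs, which is why it can filter out an outlier (often hallucinated) translation produced by any single permutation.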

Citation (APA)

Wang, C., Liu, Y., & Lam, W. (2023). Progressive Translation: Improving Domain Robustness of Neural Machine Translation with Intermediate Sequences. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 9425–9439). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-acl.601