Machine translation typically adopts an encoder-decoder framework, in which the decoder generates the target sentence word by word in an auto-regressive manner. However, the auto-regressive decoder suffers from a deep-rooted one-pass issue: each generated word is treated as part of the final output regardless of whether it is correct. These erroneous words then become part of the target-side historical context and affect the generation of subsequent target words. This paper proposes a novel synchronous refinement method that revises potential errors in the generated words by considering part of the target-side future context. In particular, the proposed approach allows the auto-regressive decoder to refine the previously generated target words and generate the next target word synchronously. Experimental results on three widely used machine translation tasks demonstrate the effectiveness of the proposed approach.
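The abstract describes the approach only at a high level. The following is a minimal, hypothetical sketch of the core decoding idea: at each step the decoder both emits the next target word and revises the previously emitted one. All names (SyncRefineDecoder, next_head, refine_head, greedy_decode) and the toy GRU-based model are assumptions for illustration, not the architecture or training procedure from the paper.

```python
import torch
import torch.nn as nn

class SyncRefineDecoder(nn.Module):
    """Toy decoder with two output heads: one predicts the next word,
    one proposes a revision of the previously generated word."""

    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.next_head = nn.Linear(hidden, vocab_size)    # predicts y_t
        self.refine_head = nn.Linear(hidden, vocab_size)  # revises y_{t-1}

    def forward(self, prev_token, state):
        state = self.rnn(self.embed(prev_token), state)
        return self.next_head(state), self.refine_head(state), state


def greedy_decode(decoder, bos_id, eos_id, max_len=20, hidden=256):
    state = torch.zeros(1, hidden)
    tokens = [bos_id]
    for _ in range(max_len):
        next_logits, refine_logits, state = decoder(
            torch.tensor([tokens[-1]]), state)
        # Synchronously revise the previously emitted word (skip BOS) ...
        if len(tokens) > 1:
            tokens[-1] = int(refine_logits.argmax(-1))
        # ... and emit the next word in the same step.
        y_t = int(next_logits.argmax(-1))
        tokens.append(y_t)
        if y_t == eos_id:
            break
    return tokens[1:]  # drop BOS


if __name__ == "__main__":
    decoder = SyncRefineDecoder(vocab_size=32)
    print(greedy_decode(decoder, bos_id=1, eos_id=2))
```

With an untrained model this produces arbitrary token ids; the sketch only illustrates how refinement of the previous word and generation of the next word can be interleaved in one pass of the decoding loop.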
Chen, K., Utiyama, M., Sumita, E., Wang, R., & Zhang, M. (2022). Synchronous Refinement for Neural Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2986–2996). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-acl.235