Non-autoregressive Neural Machine Translation with Distortion Model

Abstract

Non-autoregressive translation (NAT) has attracted attention recently due to its high efficiency during inference. Unfortunately, it performs significantly worse than the autoregressive translation (AT) model. We observe that the gap between NAT and AT can be remarkably narrowed if the decoder inputs are provided in the same order as the target sentence. However, existing NAT models still initialize the decoding process by copying source inputs from left to right, and lack an explicit mechanism for reordering the decoder inputs. To address this problem, we propose a novel distortion model that enhances the decoder inputs so as to further improve NAT models. The distortion model, incorporated into the NAT model, reorders the decoder inputs so that their word order approaches that of the decoder outputs, which reduces the search space of the non-autoregressive decoder. We verify our approach empirically through a series of experiments on three similar language pairs (En→De, En→Ro, and De→En) and two dissimilar language pairs (Zh→En and En→Ja). Quantitative and qualitative analyses demonstrate the effectiveness and universality of our proposed approach.
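The abstract does not give the model's formulation, so the following is only an illustrative sketch of the general idea: decoder inputs that were copied from the source left-to-right are reordered by a learned component before non-autoregressive decoding. The class name DistortionSketch, the max_offset parameter, and the hard argmax offset choice are all hypothetical assumptions, not the authors' method; a trained model would use the paper's actual distortion formulation and a differentiable training objective.

```python
# Hypothetical sketch (not the authors' code): reorder decoder inputs that
# were copied from the source left-to-right, using a per-position learned
# relative offset. All names and design choices here are assumptions.
import torch
import torch.nn as nn


class DistortionSketch(nn.Module):
    def __init__(self, d_model: int, max_offset: int = 4):
        super().__init__()
        self.max_offset = max_offset
        # Scores a relative offset in [-max_offset, max_offset] per position.
        self.offset_scorer = nn.Linear(d_model, 2 * max_offset + 1)

    def forward(self, copied_inputs: torch.Tensor) -> torch.Tensor:
        # copied_inputs: (batch, tgt_len, d_model), source embeddings
        # copied left-to-right onto the decoder positions.
        batch, tgt_len, d_model = copied_inputs.shape
        # Hard offset choice for illustration only; training would need a
        # differentiable relaxation or a separate distortion loss.
        offsets = self.offset_scorer(copied_inputs).argmax(dim=-1) - self.max_offset
        positions = torch.arange(tgt_len, device=copied_inputs.device)
        gather_idx = (positions.unsqueeze(0) + offsets).clamp(0, tgt_len - 1)
        # Each decoder position now reads the copied input from the position
        # the distortion component points it at (a local reordering).
        return torch.gather(
            copied_inputs, 1,
            gather_idx.unsqueeze(-1).expand(-1, -1, d_model),
        )


# Usage: the reordered tensor replaces the plain left-to-right copy as the
# input to the non-autoregressive decoder.
model = DistortionSketch(d_model=8)
reordered = model(torch.randn(2, 5, 8))
print(reordered.shape)  # torch.Size([2, 5, 8])
```

The intuition matching the abstract: if the reordering brings the decoder inputs close to the target word order, each decoder position mostly has to translate the word in front of it rather than also search over target positions, shrinking the effective search space.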

Citation (APA)

Zhou, L., Zhang, J., Zhao, Y., & Zong, C. (2020). Non-autoregressive Neural Machine Translation with Distortion Model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12430 LNAI, pp. 403–415). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60450-9_32
