Incremental decoding and training methods for simultaneous translation in neural machine translation

Abstract

We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with a dynamically built encoder and attention. We propose a tunable agent that decides the best segmentation strategy for a user-defined BLEU loss and Average Proportion (AP) constraint. Our agent outperforms the previously proposed Wait-if-diff and Wait-if-worse agents (Cho and Esipova, 2016) on BLEU with lower latency. Second, we propose data-driven changes to Neural MT training to better match the incremental decoding framework.
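
To make the decoding framework concrete, below is a minimal Python sketch, not the authors' implementation, of a prefix-to-prefix READ/WRITE loop together with the Average Proportion latency metric of Cho and Esipova (2016). The next_word and should_write callables are hypothetical stand-ins for the underlying NMT decoder and the tunable agent described in the abstract.

    # Sketch of simultaneous decoding: an agent alternates READ (consume one
    # more source token) and WRITE (commit one target token) actions, and we
    # record how much source had been read at each write to compute AP.
    from typing import Callable, List, Tuple


    def simultaneous_decode(
        source: List[str],
        next_word: Callable[[List[str], List[str]], str],   # hypothetical NMT decoder step
        should_write: Callable[[List[str], List[str]], bool],  # hypothetical tunable agent
        eos: str = "</s>",
    ) -> Tuple[List[str], List[int]]:
        """Decode over growing source prefixes. `should_write` decides, given
        the current source prefix and target prefix, whether to commit the
        next target word (WRITE) or wait for more input (READ)."""
        read = 1            # number of source tokens read so far
        hyp: List[str] = []
        reads_at_write: List[int] = []
        while True:
            if read < len(source) and not should_write(source[:read], hyp):
                read += 1   # READ: extend the source prefix
                continue
            word = next_word(source[:read], hyp)  # WRITE: commit one word
            hyp.append(word)
            reads_at_write.append(read)
            if word == eos or len(hyp) > 2 * len(source):  # safety cap
                return hyp, reads_at_write


    def average_proportion(reads_at_write: List[int], src_len: int) -> float:
        """AP = (1 / (|x| * |y|)) * sum_t g(t), where g(t) is the number of
        source tokens read when target token t was emitted (Cho and Esipova,
        2016). Values near 0.5 indicate low latency; 1.0 corresponds to
        waiting for the full source sentence before translating."""
        return sum(reads_at_write) / (src_len * len(reads_at_write))

Tightening or loosening should_write is what trades quality against latency: an agent that always waits for the full sentence recovers standard decoding with AP = 1, while an aggressive agent lowers AP at some cost in BLEU.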

Citation (APA)

Dalvi, F., Sajjad, H., Vogel, S., & Durrani, N. (2018). Incremental decoding and training methods for simultaneous translation in neural machine translation. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 2, pp. 493–499). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-2079
