Speeding up neural machine translation decoding by cube pruning

Citations: 5 · Mendeley readers: 116

Abstract

Although neural machine translation has achieved promising results, it suffers from slow translation speed. The direct consequence is that a trade-off has to be made between translation quality and speed, so its performance cannot come into full play. We apply cube pruning, a popular technique for speeding up dynamic programming, to neural machine translation decoding. To construct the equivalence classes, similar target hidden states are merged, which leads to fewer RNN expansion operations on the target side and fewer softmax operations over the large target vocabulary. Experiments show that, at the same or even better translation quality, our method translates 3.3× faster than naive beam search on GPUs and 3.5× faster on CPUs.
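To make the idea concrete, here is a minimal Python sketch of cube-pruning-style beam expansion, not the authors' implementation. It assumes a hypothetical `decoder_step(state)` that returns log-probabilities over the target vocabulary plus a new hidden state; hypotheses whose hidden states are similar are merged into one equivalence class, the RNN step and softmax run once per class, and a priority queue lazily pops the best (hypothesis, word) extensions.

```python
# Illustrative sketch only: equivalence classes over similar hidden states,
# one decoder call per class, cube-pruning-style lazy expansion via a heap.
import heapq
import itertools
import numpy as np


def group_hidden_states(hidden_states, threshold=0.95):
    """Group hypotheses whose hidden states have cosine similarity >= threshold.
    Returns a list of (representative_state, [hypothesis indices])."""
    classes = []
    for i, h in enumerate(hidden_states):
        for rep, members in classes:
            cos = np.dot(rep, h) / (np.linalg.norm(rep) * np.linalg.norm(h) + 1e-9)
            if cos >= threshold:
                members.append(i)
                break
        else:
            classes.append((h, [i]))
    return classes


def cube_pruning_step(hypotheses, hidden_states, decoder_step, beam_size):
    """One decoding step. `hypotheses` is a list of (score, tokens);
    `decoder_step(state)` returns (log_probs over vocab, new_state)."""
    classes = group_hidden_states(hidden_states)
    counter = itertools.count()  # tie-breaker so heapq never compares payloads
    heap = []
    for rep_state, members in classes:
        # One expensive RNN expansion + softmax per equivalence class,
        # instead of one per hypothesis as in naive beam search.
        log_probs, new_state = decoder_step(rep_state)
        top_words = np.argsort(-log_probs)[:beam_size]
        for hyp_idx in members:
            score0, _ = hypotheses[hyp_idx]
            item = (hyp_idx, 0, top_words, log_probs, new_state)
            heapq.heappush(heap, (-(score0 + log_probs[top_words[0]]),
                                  next(counter), item))

    new_beam = []
    while heap and len(new_beam) < beam_size:
        neg_score, _, (hyp_idx, rank, top_words, log_probs, new_state) = heapq.heappop(heap)
        score0, tokens = hypotheses[hyp_idx]
        new_beam.append((-neg_score, tokens + [int(top_words[rank])], new_state))
        if rank + 1 < len(top_words):
            # Lazily push this hypothesis's next-best word (cube pruning).
            nxt = rank + 1
            item = (hyp_idx, nxt, top_words, log_probs, new_state)
            heapq.heappush(heap, (-(score0 + log_probs[top_words[nxt]]),
                                  next(counter), item))
    return new_beam
```

In this sketch the expensive `decoder_step` call is issued once per equivalence class rather than once per hypothesis, which is where the speed-up over naive beam search would come from; the similarity threshold and grouping rule are illustrative assumptions, not the paper's exact criterion.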

Citation (APA)

Zhang, W., Huang, L., Feng, Y., Shen, L., & Liu, Q. (2018). Speeding up neural machine translation decoding by cube pruning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 4284–4294). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1460
