Trainable greedy decoding for neural machine translation

Citations: 35 · Mendeley readers: 242

Abstract

Recent research in neural machine translation has largely focused on two aspects: neural network architectures and end-to-end learning algorithms. The problem of decoding, however, has received relatively little attention from the research community. In this paper, we solely focus on the problem of decoding given a trained neural machine translation model. Instead of trying to build a new decoding algorithm for any specific decoding objective, we propose the idea of a trainable decoding algorithm, in which we train a decoding algorithm to find a translation that maximizes an arbitrary decoding objective. More specifically, we design an actor that observes and manipulates the hidden state of the neural machine translation decoder, and we propose to train it using a variant of deterministic policy gradient. We extensively evaluate the proposed algorithm using four language pairs and two decoding objectives, and show that we can indeed train a trainable greedy decoder that generates a better translation (in terms of a target decoding objective) with minimal computational overhead.
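The core idea in the abstract can be sketched in a few lines: a frozen decoder produces hidden states, and a small trainable actor adds a perturbation to each hidden state before the greedy argmax. The sketch below uses a toy tanh RNN as the "trained decoder" and a single linear actor; all sizes and names are illustrative assumptions, not from the paper, and the deterministic-policy-gradient training of the actor is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen for illustration only.
HIDDEN, VOCAB = 8, 20

# Stand-ins for the frozen, pre-trained NMT decoder's parameters.
W_out = rng.standard_normal((HIDDEN, VOCAB)) * 0.1  # hidden -> vocab logits
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # recurrent weights
E = rng.standard_normal((VOCAB, HIDDEN)) * 0.1       # token embeddings

# The trainable actor: a linear map that reads the decoder's hidden
# state and emits an additive perturbation. Initialized to zero, so
# it starts out leaving the decoder's behaviour unchanged.
A = np.zeros((HIDDEN, HIDDEN))

def decoder_step(h, y_prev):
    """One step of the frozen decoder (a toy tanh RNN)."""
    return np.tanh(W_rec @ h + E[y_prev])

def greedy_decode(h0, steps, use_actor=False):
    """Greedy decoding; optionally let the actor nudge each hidden state."""
    h, y, out = h0, 0, []
    for _ in range(steps):
        h = decoder_step(h, y)
        if use_actor:
            h = h + A @ h          # actor manipulates the hidden state
        y = int(np.argmax(W_out.T @ h))  # greedy token choice
        out.append(y)
    return out
```

In the paper's setting, `A` (there, a small neural network) would be trained with a deterministic policy gradient variant so that the perturbed greedy decode scores higher on the chosen decoding objective; the frozen decoder parameters are never touched, which is why the per-step overhead stays minimal.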

Citation (APA)

Gu, J., Cho, K., & Li, V. O. K. (2017). Trainable greedy decoding for neural machine translation. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1968–1978). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1210
