Correcting Length Bias in Neural Machine Translation

Abstract

We study two problems in neural machine translation (NMT). First, in beam search, although a wider beam should in principle help translation, it often hurts NMT. Second, NMT has a tendency to produce translations that are too short. Here, we argue that these problems are closely related and both rooted in label bias. We show that correcting the brevity problem almost eliminates the beam problem; we compare some commonly used methods for doing this, finding that a simple per-word reward works well; and we introduce a simple and quick way to tune this reward using the perceptron algorithm.
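
To make the abstract's proposal concrete: the per-word reward adds a bonus gamma to the model score for every word in a hypothesis, counteracting the model's preference for short outputs, and gamma can be tuned with a perceptron-style update that nudges output lengths toward reference lengths. The Python sketch below is an illustration under those assumptions, not the authors' exact recipe; the function names, the decode callback, and the learning rate are hypothetical.

    # Minimal sketch: per-word reward rescoring and perceptron-style tuning.

    def rescore(log_prob, length, gamma):
        # Score = log P(y|x) + gamma * |y|; a positive gamma rewards longer outputs.
        return log_prob + gamma * length

    def tune_gamma(examples, decode, gamma=0.0, lr=0.01, epochs=5):
        # examples: iterable of (source, reference_length) pairs.
        # decode(source, gamma): assumed to run beam search with the per-word
        # reward applied and return the length of the best hypothesis.
        for _ in range(epochs):
            for source, ref_len in examples:
                hyp_len = decode(source, gamma)
                # Outputs too short: raise gamma. Outputs too long: lower it.
                gamma += lr * (ref_len - hyp_len)
        return gamma

With the reward in place, a wider beam no longer systematically favors shorter hypotheses, which is the connection between the brevity problem and the beam problem that the abstract draws.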

Citation (APA)

Murray, K., & Chiang, D. (2018). Correcting length bias in neural machine translation. In Proceedings of the Third Conference on Machine Translation (WMT 2018), Vol. 1 (pp. 212–223). Association for Computational Linguistics. https://doi.org/10.18653/v1/w18-6322
