Widening the representation bottleneck in neural machine translation with lexical shortcuts

Abstract

The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context. Lexical features are fed into the first layer and propagated through a deep network of hidden layers. We argue that the need to represent and propagate lexical features in each layer limits the model's capacity for learning and representing other information relevant to the task. To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder. This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states. We show that the proposed modification yields consistent improvements over a baseline transformer on standard WMT translation tasks in 5 translation directions (0.9 BLEU on average) and reduces the amount of lexical information passed along the hidden layers. We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of proposed shortcuts on model behavior.
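To make the mechanism concrete, below is a minimal sketch of one way such a gated lexical shortcut could be implemented in PyTorch. The module name `LexicalShortcut`, the concatenation-based gate, and the element-wise interpolation are illustrative assumptions drawn from the abstract's description, not the authors' exact formulation; consult the paper for the precise placement of the shortcut within each layer.

```python
import torch
import torch.nn as nn

class LexicalShortcut(nn.Module):
    """Hypothetical gated shortcut between the embedding layer and a
    deeper layer: a sigmoid gate decides, per dimension, how much
    lexical (embedding) content to re-inject into the hidden state."""

    def __init__(self, d_model: int):
        super().__init__()
        # The gate conditions on both the hidden state and the embedding
        # by projecting their concatenation (an assumed formulation).
        self.gate_proj = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden: torch.Tensor, embed: torch.Tensor) -> torch.Tensor:
        # hidden, embed: [batch, seq_len, d_model]
        gate = torch.sigmoid(self.gate_proj(torch.cat([hidden, embed], dim=-1)))
        # Convex combination: the layer keeps (1 - gate) of its own
        # representation and retrieves gate-weighted lexical features.
        return gate * embed + (1.0 - gate) * hidden

# Usage: re-inject embedding content into the output of some layer l.
shortcut = LexicalShortcut(d_model=512)
e = torch.randn(2, 10, 512)  # embedding-layer output
h = torch.randn(2, 10, 512)  # hidden states after layer l
fused = shortcut(h, e)       # same shape; feeds the next layer
```

Because the gate is learned per dimension and per position, the network can retrieve lexical features only where they help, which is what lets intermediate states stop expending capacity on carrying them through every layer.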

Citation (APA)

Emelin, D., Titov, I., & Sennrich, R. (2019). Widening the representation bottleneck in neural machine translation with lexical shortcuts. In WMT 2019 - 4th Conference on Machine Translation, Proceedings of the Conference (Vol. 1, pp. 102–115). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5211
