Sparse and constrained attention for neural machine translation


Abstract

In neural machine translation (NMT), source words are sometimes dropped or generated repeatedly in the translation. We explore novel strategies to address this coverage problem that change only the attention transformation. Our approach allocates fertilities to source words, which are used to bound the total attention each word can receive. We experiment with various sparse and constrained attention transformations and propose a new one, constrained sparsemax, shown to be differentiable and sparse. Empirical evaluation is provided on three language pairs.
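The core operation can be pictured as a Euclidean projection of the attention scores onto the set of probability distributions capped by the fertility upper bounds. Below is a minimal NumPy sketch of that projection, not the paper's method: the function name and the bisection solver are illustrative assumptions, whereas the paper derives an exact algorithm together with the gradients needed for backpropagation.

```python
import numpy as np

def constrained_sparsemax(z, u, n_iter=50):
    """Project scores z onto {p : sum(p) = 1, 0 <= p <= u} in Euclidean norm.

    Illustrative sketch: solves argmin_p ||p - z||^2 by bisection on the
    threshold tau, using the fact that the solution has the form
    p_i = clip(z_i - tau, 0, u_i) and that sum_i p_i is nonincreasing in tau.
    Requires sum(u) >= 1 so that a feasible distribution exists.
    """
    z = np.asarray(z, dtype=float)
    u = np.asarray(u, dtype=float)
    assert u.sum() >= 1.0, "fertility bounds must admit a distribution"
    # At tau = lo all coordinates hit their caps (mass >= 1);
    # at tau = hi all coordinates are clipped to zero (mass = 0).
    lo, hi = (z - u).min(), z.max()
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        if np.clip(z - tau, 0.0, u).sum() > 1.0:
            lo = tau
        else:
            hi = tau
    return np.clip(z - 0.5 * (lo + hi), 0.0, u)

# Example: with tight caps, attention mass spills over to other words.
scores = np.array([3.0, 1.0, 0.5])
caps = np.array([0.4, 1.0, 1.0])  # hypothetical fertility bounds
print(constrained_sparsemax(scores, caps))  # first weight is capped at 0.4
```

With caps of 1 on every word this reduces to ordinary sparsemax (projection onto the simplex), which is one way to see why the transformation remains sparse and differentiable.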

Citation (APA)
Malaviya, C., Ferreira, P., & Martins, A. F. T. (2018). Sparse and constrained attention for neural machine translation. In ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 2, pp. 370–376). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p18-2059
