Improving lexical choice in neural machine translation

Citations: 49 · Mendeley readers: 201

Abstract

We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrase-based translation in nearly all settings.
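The first fix described in the abstract can be sketched in a few lines: both the context vector and every output word embedding are rescaled to a fixed norm before the inner product, so a frequent word cannot win simply by having a longer embedding. This is a minimal illustrative sketch; the function name, the radius `r`, and the dimensions are assumptions, not taken from the paper's implementation.

```python
import numpy as np

def fixed_norm_logits(context, embeddings, r=5.0, eps=1e-8):
    """Score each output word by the inner product of vectors rescaled
    to a fixed norm r (an assumed hyperparameter here), so that frequent
    words cannot dominate purely through larger embedding norms."""
    # Rescale the context vector to norm r.
    c = r * context / (np.linalg.norm(context) + eps)
    # Rescale every row (word embedding) of the output matrix to norm r.
    E = r * embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + eps)
    # Logits are bounded by r**2, independent of word frequency.
    return E @ c  # shape: (vocab_size,)
```

With `r = 5`, every logit lies in `[-25, 25]` by the Cauchy–Schwarz inequality, regardless of how often a word appeared in training.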

Citation (APA)

Nguyen, T. Q., & Chiang, D. (2018). Improving lexical choice in neural machine translation. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference (Vol. 1, pp. 334–343). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/n18-1031
