Neural Machine Translation Techniques for Named Entity Transliteration


Abstract

Transliterating named entities from one language into another can be approached as a neural machine translation (NMT) problem, for which we use deep attentional RNN encoder-decoder models. To build a strong transliteration system, we apply well-established techniques from NMT, such as dropout regularization, model ensembling, rescoring with right-to-left models, and back-translation. Our submission to the NEWS 2018 Shared Task on Named Entity Transliteration ranked first in several tracks.
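As a rough illustration of how transliteration can be cast as character-level NMT, the sketch below (not the authors' code; the model scorers are hypothetical stand-ins) shows how a named entity can be split into character tokens and how an n-best list from a left-to-right model might be re-ranked with a right-to-left model, one of the rescoring techniques mentioned in the abstract. A full system would use trained NMT models in place of the toy scoring functions.

from typing import Callable, List, Tuple

def to_char_tokens(name: str) -> List[str]:
    """Split a named entity into character tokens, using '▁' to mark spaces."""
    return ["▁" if c == " " else c for c in name]

def rescore_nbest(
    source: List[str],
    nbest: List[Tuple[List[str], float]],  # (hypothesis, left-to-right log-prob)
    score_r2l: Callable[[List[str], List[str]], float],
    weight: float = 0.5,
) -> List[Tuple[List[str], float]]:
    """Re-rank hypotheses by interpolating left-to-right and right-to-left scores."""
    rescored = []
    for hyp, l2r_score in nbest:
        # The right-to-left model scores the reversed target sequence.
        r2l_score = score_r2l(source, hyp[::-1])
        combined = (1.0 - weight) * l2r_score + weight * r2l_score
        rescored.append((hyp, combined))
    return sorted(rescored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    src = to_char_tokens("New Delhi")
    print(src)  # ['N', 'e', 'w', '▁', 'D', 'e', 'l', 'h', 'i']

    # Toy n-best list with made-up log-probabilities.
    nbest = [
        (list("nyu dilli"), -3.4),
        (list("nyu deli"), -3.1),
    ]
    # Hypothetical right-to-left scorer; a real system would query a second NMT model.
    best = rescore_nbest(src, nbest, score_r2l=lambda s, t: -0.3 * len(t))
    print("".join(best[0][0]))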

Citation (APA)

Grundkiewicz, R., & Heafield, K. (2018). Neural Machine Translation Techniques for Named Entity Transliteration. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 89–94). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w18-2413
