When and why is document-level context useful in neural machine translation?


Abstract

Document-level context has received much attention as a way to compensate for the limitations of neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limits of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for context modeling and that very long context is not helpful for NMT.

Citation (APA)

Kim, Y., Tran, D. T., & Ney, H. (2019). When and why is document-level context useful in neural machine translation? In DiscoMT@EMNLP 2019 - Proceedings of the 4th Workshop on Discourse in Machine Translation (pp. 24–34). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d19-6503
