Measuring and increasing context usage in context-aware machine translation

Citations: 35
Mendeley readers: 100

Abstract

Recent work in neural machine translation has demonstrated both the necessity and feasibility of using inter-sentential context: context from sentences other than those currently being translated. However, while many current methods present model architectures that can in theory use this extra context, it is often unclear how much they actually use it at translation time. In this paper, we introduce a new metric, conditional cross-mutual information, to quantify how much these models use context. Using this metric, we measure how much document-level machine translation systems use particular varieties of context. We find that target context is referenced more than source context, and that conditioning on longer context yields diminishing returns. We then introduce a new, simple training method, context-aware word dropout, to increase context usage in context-aware models. Experiments show that our method increases context usage, and that this is reflected in improved translation quality according to metrics such as BLEU and COMET, as well as in performance on anaphoric pronoun resolution and lexical cohesion contrastive datasets.
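The conditional cross-mutual information (CXMI) metric can be understood as the drop in cross-entropy on the target when context is supplied, estimated by comparing a context-agnostic model against a context-aware one. A sketch of the estimate, with notation lightly adapted from the paper (q_A denotes the context-agnostic model, q_C the context-aware model):

```latex
% CXMI: how much adding context C reduces uncertainty about Y given X.
% Estimated over N held-out examples (sketch; notation adapted).
\[
\mathrm{CXMI}(C \rightarrow Y \mid X)
  = H_{q_A}(Y \mid X) - H_{q_C}(Y \mid X, C)
  \approx \frac{1}{N} \sum_{i=1}^{N}
      \log \frac{q_C\!\left(y^{(i)} \mid x^{(i)}, C^{(i)}\right)}
                {q_A\!\left(y^{(i)} \mid x^{(i)}\right)}
\]
```

Context-aware word dropout, as described in the abstract, encourages the model to rely on context by withholding some of the current sentence during training. Below is a minimal illustrative sketch, assuming tokenized input; the placeholder token name and dropout rate here are hypothetical choices, not the paper's exact configuration:

```python
import random

def coword_dropout(src_tokens, p=0.1, placeholder="<mask>"):
    """Context-aware word dropout (sketch): randomly replace tokens of
    the current source sentence with a placeholder, so the model must
    consult the surrounding context to recover the missing content.
    `p` and `placeholder` are illustrative, not the paper's settings."""
    return [placeholder if random.random() < p else tok
            for tok in src_tokens]

# Usage: applied to the current source sentence at training time,
# while the context sentences are passed through unchanged.
print(coword_dropout("the cat sat on the mat".split(), p=0.3))
```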

Citation (APA)

Fernandes, P., Yin, K., Neubig, G., & Martins, A. F. T. (2021). Measuring and increasing context usage in context-aware machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021) (pp. 6467–6478). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.505
