Encouraging Neural Machine Translation to Satisfy Terminology Constraints

13 citations · 63 Mendeley readers

Abstract

We present a new approach to encourage neural machine translation to satisfy lexical constraints. Our method acts at training time, thereby avoiding any extra computational overhead at inference time. The proposed method combines three main ingredients. The first consists in augmenting the training data to specify the constraints; intuitively, this encourages the model to learn a copy behavior when it encounters constraint terms. Compared to previous work, we use a simplified augmentation strategy without source factors. The second ingredient is constraint token masking, which makes it even easier for the model to learn the copy behavior and to generalize better. The third is a modification of the standard cross-entropy loss that biases the model towards assigning high probabilities to constraint words. Empirical results show that our method improves upon related baselines in terms of both BLEU score and the percentage of generated constraint terms.
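The three ingredients can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: the single-token augmentation scheme, the `<mask>` placeholder, and the constraint weight of 2.0 are all assumptions made for the example.

```python
import math

MASK = "<mask>"  # hypothetical mask token (assumption, not from the paper)

def augment_and_mask(src_tokens, constraints, mask=True):
    """Inline the target side of each constraint into the source sentence.

    `constraints` is a list of (source_term, target_term) pairs. When a
    source term is found, its target term is appended right after it, so the
    decoder can learn to copy it (first ingredient); with mask=True the
    source term is additionally replaced by a mask token (second ingredient).
    Single-token terms only, for brevity.
    """
    out = []
    for tok in src_tokens:
        hit = next((c for c in constraints if c[0] == tok), None)
        if hit is None:
            out.append(tok)
        else:
            out.append(MASK if mask else tok)
            out.append(hit[1])
    return out

def constraint_weighted_xent(log_probs, target_ids, constraint_ids, weight=2.0):
    """Cross entropy with constraint-token terms up-weighted (third ingredient).

    `log_probs[t][v]` is the model's log-probability of vocabulary id `v` at
    decoding step `t`. Target tokens in `constraint_ids` contribute `weight`
    times their usual loss, biasing the model toward generating them.
    """
    total = 0.0
    for lp, tid in zip(log_probs, target_ids):
        w = weight if tid in constraint_ids else 1.0
        total += -w * lp[tid]
    return total / len(target_ids)
```

For example, with the constraint ("chat", "cat"), the French source "le chat dort" would be rewritten as "le &lt;mask&gt; cat dort", and during training the loss term for the target token "cat" would be doubled.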

Cite (APA)

Ailem, M., Liu, J., & Qader, R. (2021). Encouraging Neural Machine Translation to Satisfy Terminology Constraints. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1450–1455). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.125
