Cost Weighting for Neural Machine Translation Domain Adaptation

45 citations · 167 Mendeley readers

Abstract

In this paper, we propose a new domain adaptation technique for neural machine translation called cost weighting, which is appropriate for adaptation scenarios in which a small in-domain data set and a large general-domain data set are available. Cost weighting incorporates a domain classifier into the neural machine translation training algorithm, using features derived from the encoder representation in order to distinguish in-domain from out-of-domain data. Classifier probabilities are used to weight sentences according to their domain similarity when updating the parameters of the neural translation model. We compare cost weighting to two traditional domain adaptation techniques developed for statistical machine translation: data selection and sub-corpus weighting. Experiments on two large-data tasks show that both the traditional techniques and our novel proposal lead to significant gains, with cost weighting outperforming the traditional methods.
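The abstract describes cost weighting only at a high level. Below is a minimal PyTorch sketch of one way the weighted training objective could be realized, assuming mean-pooled encoder states feed a binary domain classifier whose in-domain probability scales each sentence's negative log-likelihood. The names (DomainClassifier, cost_weighted_nll), the mean pooling, and the choice to detach the weights from the translation loss are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainClassifier(nn.Module):
    """Binary domain classifier over pooled encoder states (hypothetical component)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, encoder_states: torch.Tensor, src_mask: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, src_len, hidden); src_mask: (batch, src_len), 1 for real tokens.
        mask = src_mask.unsqueeze(-1).float()
        pooled = (encoder_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        # Per-sentence probability of being in-domain.
        return torch.sigmoid(self.proj(pooled)).squeeze(-1)


def cost_weighted_nll(logits: torch.Tensor,
                      targets: torch.Tensor,
                      pad_id: int,
                      domain_probs: torch.Tensor) -> torch.Tensor:
    """Sentence-level NLL, each sentence scaled by its domain-classifier probability."""
    # logits: (batch, tgt_len, vocab); targets: (batch, tgt_len).
    token_nll = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=pad_id, reduction="none"
    )  # (batch, tgt_len); padded positions contribute zero loss.
    sent_nll = token_nll.sum(dim=1)
    # Detaching keeps the translation loss from updating the classifier through the weights
    # (an assumption of this sketch, not necessarily the paper's setup).
    return (domain_probs.detach() * sent_nll).mean()
```

In this sketch the weights enter only as per-sentence multipliers on the standard cross-entropy objective, so general-domain sentences that the classifier judges similar to the in-domain data still contribute, just with smaller gradient magnitude.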

Citation (APA)

Chen, B., Cherry, C., Foster, G., & Larkin, S. (2017). Cost weighting for neural machine translation domain adaptation. In Proceedings of the First Workshop on Neural Machine Translation (pp. 40–46). Association for Computational Linguistics. https://doi.org/10.18653/v1/w17-3205