Exploiting source-side monolingual data in neural machine translation

Abstract

Neural Machine Translation (NMT) based on the encoder-decoder architecture has recently emerged as a new paradigm. Researchers have shown that target-side monolingual data can greatly enhance the decoder model of NMT. However, source-side monolingual data remains underexplored, although it should be useful for strengthening the encoder model of NMT, especially when the parallel corpus is far from sufficient. In this paper, we propose two approaches to make full use of source-side monolingual data in NMT. The first approach employs a self-learning algorithm to generate large-scale synthetic parallel data for NMT training. The second approach applies a multi-task learning framework that uses two NMT models to simultaneously predict the translation and the reordered source-side monolingual sentences. Extensive experiments demonstrate that the proposed methods obtain significant improvements over a strong attention-based NMT baseline.
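The first approach described in the abstract (self-learning) can be sketched as follows: a model trained on the existing parallel corpus translates source-side monolingual sentences, and the resulting synthetic pairs are appended to the training data. This is a minimal illustration, not the paper's implementation; `translate` here stands in for a trained NMT decoder, and the toy word-reversal translator is purely a placeholder.

```python
def build_synthetic_corpus(mono_src, translate, parallel):
    """Self-learning sketch: translate each source-side monolingual
    sentence with the current model and append the synthetic
    (source, target) pairs to the parallel corpus."""
    synthetic = [(src, translate(src)) for src in mono_src]
    return parallel + synthetic


# Placeholder "model" (assumption): reverses word order instead of
# actually translating, just to make the pipeline runnable.
def toy_translate(sentence):
    return " ".join(reversed(sentence.split()))


parallel = [("hello world", "bonjour monde")]
mono = ["neural machine translation"]
corpus = build_synthetic_corpus(mono, toy_translate, parallel)
# The combined corpus now mixes genuine and synthetic pairs; in the
# paper's setting, NMT training would then continue on this larger set.
```

In practice the synthetic pairs are noisier than the genuine ones, so the real method can weight or filter them; the sketch only shows the data-augmentation step itself.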

Citation (APA)

Zhang, J., & Zong, C. (2016). Exploiting source-side monolingual data in neural machine translation. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1535–1545). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1160
