Maximum expected likelihood estimation for zero-resource neural machine translation


Abstract

While neural machine translation (NMT) has recently made remarkable progress on a handful of resource-rich language pairs, parallel corpora are not readily available for most language pairs. To deal with this problem, we propose an approach to zero-resource NMT via maximum expected likelihood estimation. The basic idea is to maximize, on a pivot-target parallel corpus, the expectation of the intended source-to-target model's likelihood with respect to a pivot-to-source translation model. To approximate this expectation, we propose two methods to connect the pivot-to-source and source-to-target models. Experiments on two zero-resource language pairs show that the proposed approach yields substantial gains over baseline methods. We also observe that when trained jointly with the source-to-target model, the pivot-to-source translation model also improves over independent training.
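The training objective described in the abstract can be sketched as follows. The notation here (z for a pivot sentence, x for a source sentence, y for a target sentence, and the two parameter sets) is an assumption inferred from the abstract's description, not taken verbatim from the paper:

```latex
% Sketch of maximum expected likelihood estimation (notation assumed):
% given a pivot-target parallel corpus D = {(z, y)}, the source-to-target
% model P(y | x; \theta_{x \to y}) is trained by taking the expectation
% over source sentences x drawn from a pivot-to-source model
% P(x | z; \theta_{z \to x}).
J(\theta_{x \to y}) = \sum_{(z, y) \in D}
  \mathbb{E}_{x \sim P(x \mid z;\, \theta_{z \to x})}
  \bigl[ \log P(y \mid x;\, \theta_{x \to y}) \bigr]
```

Since this expectation is intractable to compute exactly over all possible source sentences, the paper's two proposed connection methods presumably approximate it, e.g. by sampling from or otherwise summarizing the pivot-to-source distribution.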

Citation (APA)
Zheng, H., Cheng, Y., & Liu, Y. (2017). Maximum expected likelihood estimation for zero-resource neural machine translation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 0, pp. 4251–4257). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/594
