Multi-Task Learning for Multiple Language Translation

480 citations · 414 Mendeley readers

Abstract

In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language into multiple target languages. Our solution is inspired by the recently proposed neural machine translation model, which generalizes machine translation as a sequence learning problem. We extend neural machine translation to a multi-task learning framework that shares the source language representation while modeling each target language translation separately. The framework applies both when large amounts of parallel data are available and when parallel data is limited. Experiments on publicly available datasets show that our multi-task learning model achieves significantly higher translation quality than individually learned models in both situations.
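The shared-encoder, per-target-decoder design described in the abstract lends itself to a compact sketch. The following is a minimal illustration in PyTorch, not the authors' implementation: the GRU-based seq2seq layout, the absence of an attention mechanism, and all names and dimensions (MultiTargetNMT, emb_dim, hid_dim) are assumptions made here for clarity. One encoder produces the shared source representation; each target language gets its own embedding, decoder, and output projection.

```python
# Minimal sketch (not the authors' code) of a shared-encoder,
# per-target-decoder multi-task NMT model. All sizes and names
# are illustrative assumptions.
import torch
import torch.nn as nn


class MultiTargetNMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocabs, emb_dim=256, hid_dim=512):
        super().__init__()
        # One encoder shared across all language pairs: it learns a
        # single representation of the source language.
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # One decoder (embedding + GRU + output projection) per target
        # language: translation into each language is modeled separately.
        self.tgt_embs = nn.ModuleDict({
            lang: nn.Embedding(v, emb_dim) for lang, v in tgt_vocabs.items()
        })
        self.decoders = nn.ModuleDict({
            lang: nn.GRU(emb_dim, hid_dim, batch_first=True)
            for lang in tgt_vocabs
        })
        self.outs = nn.ModuleDict({
            lang: nn.Linear(hid_dim, v) for lang, v in tgt_vocabs.items()
        })

    def forward(self, src_ids, tgt_ids, lang):
        # Encode the source once; the final hidden state initializes
        # the target-language-specific decoder.
        _, h = self.encoder(self.src_emb(src_ids))
        dec_out, _ = self.decoders[lang](self.tgt_embs[lang](tgt_ids), h)
        return self.outs[lang](dec_out)  # logits over the target vocab


# Toy usage: alternating mini-batches from different language pairs lets
# the shared encoder receive gradients from every target language.
model = MultiTargetNMT(src_vocab=1000, tgt_vocabs={"fr": 1200, "es": 1100})
src = torch.randint(0, 1000, (4, 7))  # batch of source token ids
tgt = torch.randint(0, 1200, (4, 9))  # batch of French token ids
logits = model(src, tgt, lang="fr")
print(logits.shape)                   # torch.Size([4, 9, 1200])
```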

Cite

APA

Dong, D., Wu, H., He, W., Yu, D., & Wang, H. (2015). Multi-task learning for multiple language translation. In ACL-IJCNLP 2015 - 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 1723–1732). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p15-1166
