Multi-task neural model for agglutinative language translation


Abstract

Neural machine translation (NMT) has recently achieved impressive performance by exploiting large-scale parallel corpora. However, it struggles in the low-resource, morphologically rich setting of agglutinative language translation. Inspired by the finding that monolingual data can greatly improve NMT performance, we propose a multi-task neural model that jointly learns bi-directional translation and agglutinative language stemming. Our approach employs a shared encoder and decoder to train a single model without changing the standard NMT architecture; instead, a token is added before each source-side sentence to specify the desired target output of the two different tasks. Experimental results on Turkish-English and Uyghur-Chinese show that the proposed approach significantly improves translation performance on agglutinative languages using only a small amount of monolingual data.
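To make the token-prefixing setup concrete, the sketch below (Python, not from the paper) shows how mixed training examples for the joint tasks might be assembled. The task tokens <2en>, <2tr>, and <stem>, the function names, and the sample sentences are all illustrative assumptions; the abstract only states that a token is prepended to each source sentence to indicate the desired target output.

    # A minimal sketch (not the authors' code) of preparing mixed training
    # data for one shared encoder-decoder model. Task tokens are assumed.

    def tag(token, sentence):
        """Prefix a source sentence with a task token."""
        return f"{token} {sentence}"

    def build_examples(parallel_pairs, stemming_pairs):
        """Mix translation and stemming tasks into one training stream.

        parallel_pairs: list of (turkish, english) sentence pairs
        stemming_pairs: list of (surface_form, stemmed_form) monolingual pairs
        """
        examples = []
        for tr, en in parallel_pairs:
            examples.append((tag("<2en>", tr), en))  # forward translation
            examples.append((tag("<2tr>", en), tr))  # backward translation
        for surface, stems in stemming_pairs:
            examples.append((tag("<stem>", surface), stems))  # stemming task
        return examples

    if __name__ == "__main__":
        parallel = [("kitaplarımı okudum", "i read my books")]
        stemming = [("kitaplarımı okudum", "kitap oku")]
        for src, tgt in build_examples(parallel, stemming):
            print(f"{src}\t{tgt}")

Because all examples share one source and target vocabulary and one model, the stemming examples built from monolingual data can regularize the encoder's handling of rich agglutinative morphology without any architectural change.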

Cite (APA)

Pan, Y., Li, X., Yang, Y., & Dong, R. (2020). Multi-task neural model for agglutinative language translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 103–110). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.acl-srw.15
