Multilingual Neural Machine Translation approaches are based on task-specific models, so adding one more language requires retraining the whole system. In this work, we propose a new training schedule, based on joint training and language-independent encoder/decoder modules, that allows the system to scale to more languages without modifying previously trained components and enables zero-shot translation. This work in progress shows results close to the state of the art on the WMT task.
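As a rough illustration only (not the paper's actual implementation), the modular idea behind the abstract — per-language encoders and decoders exchanging a shared language-independent representation, so that a new language adds modules without retraining existing ones and unseen encoder/decoder pairs can be combined for zero-shot translation — can be sketched as:

```python
# Toy sketch of modular multilingual NMT. All class names and the toy
# "shared representation" (a list of hashed token ids) are illustrative
# assumptions, not the system described in the paper.

class ToyEncoder:
    def __init__(self, lang):
        self.lang = lang

    def encode(self, sentence):
        # Map source tokens into a language-independent representation
        # (toy stand-in: hashed token ids).
        return [hash(tok) % 1000 for tok in sentence.split()]


class ToyDecoder:
    def __init__(self, lang):
        self.lang = lang

    def decode(self, representation):
        # Produce a placeholder target-language token per representation item.
        return " ".join(f"{self.lang}_{i}" for i in representation)


class ModularMNMT:
    def __init__(self):
        self.encoders, self.decoders = {}, {}

    def add_language(self, lang):
        # Incremental training: adding a language only adds new modules;
        # previously added encoders/decoders are untouched.
        self.encoders[lang] = ToyEncoder(lang)
        self.decoders[lang] = ToyDecoder(lang)

    def translate(self, src, tgt, sentence):
        # Any encoder can be paired with any decoder, which is what makes
        # zero-shot directions possible.
        return self.decoders[tgt].decode(self.encoders[src].encode(sentence))


system = ModularMNMT()
for lang in ("en", "de", "fr"):
    system.add_language(lang)

# A de->fr pairing never trained directly still works through the shared
# representation (zero-shot in this toy sense).
print(system.translate("de", "fr", "guten tag"))
```

In the real system the shared representation is a learned continuous space and the modules are neural networks, but the interface — independent per-language encoders and decoders meeting at a common representation — is the property this sketch highlights.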
Citation:
Escolano, C., Costa-jussà, M. R., & Fonollosa, J. A. R. (2019). From bilingual to multilingual neural machine translation by incremental training. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Student Research Workshop (pp. 236–242). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-2033