The NiuTrans Machine Translation Systems for WMT20


Abstract

This paper describes the NiuTrans neural machine translation systems for the WMT20 news translation tasks. We participated in five tasks, Japanese↔English, English→Chinese, Inuktitut→English, and Tamil→English, and ranked first in both directions of Japanese↔English. Our systems mainly relied on iterative back-translation, model architectures of varying depth and width, iterative knowledge distillation, and iterative fine-tuning. We found that adequately widening and deepening the model at the same time significantly improves performance. The iterative fine-tuning strategy we implemented was also effective for domain adaptation. For the Inuktitut→English and Tamil→English tasks, we built separate multilingual models and employed pretrained word embeddings to obtain better performance.
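The iterative back-translation mentioned in the abstract can be sketched as a simple control loop: each direction's model back-translates the other side's monolingual data, and the synthetic pairs augment the training set for the next round. The sketch below is illustrative only; `train` and `translate` are hypothetical stand-ins for a real NMT toolkit and are stubbed here so the loop runs.

```python
# Minimal sketch of iterative back-translation (illustrative, not the
# authors' actual pipeline). train/translate are hypothetical stubs.

def train(parallel_pairs):
    """Stub: a 'model' that just memorizes its parallel pairs."""
    return dict(parallel_pairs)

def translate(model, sentences):
    """Stub: looks each sentence up in the memorized table."""
    return [model.get(s, "<unk>") for s in sentences]

def iterative_back_translation(bitext, mono_src, mono_tgt, rounds=2):
    """Alternately augment each direction's training data with
    synthetic pairs produced by the opposite-direction model."""
    fwd_data = list(bitext)                  # src -> tgt pairs
    bwd_data = [(t, s) for s, t in bitext]   # tgt -> src pairs
    for _ in range(rounds):
        fwd = train(fwd_data)
        bwd = train(bwd_data)
        # Back-translate target monolingual text into synthetic source.
        synth_src = translate(bwd, mono_tgt)
        fwd_data = list(bitext) + list(zip(synth_src, mono_tgt))
        # Forward-translate source monolingual text into synthetic target.
        synth_tgt = translate(fwd, mono_src)
        bwd_data = ([(t, s) for s, t in bitext]
                    + [(t, s) for s, t in zip(mono_src, synth_tgt)])
    return train(fwd_data), train(bwd_data)
```

In practice each round also involves filtering the synthetic data and retraining from scratch or fine-tuning, details the paper itself covers.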

Citation (APA)

Zhang, Y., Wang, Z., Cao, R., Wei, B., Shan, W., Zhou, S., … Zhu, J. (2020). The NiuTrans Machine Translation Systems for WMT20. In 5th Conference on Machine Translation, WMT 2020 - Proceedings (pp. 338–345). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-5325
