Results of the WMT17 neural MT training task


Abstract

This paper presents the results of the WMT17 Neural MT Training Task. The objective of this task is to explore methods of training a fixed neural architecture, aiming primarily at the best translation quality and, as a secondary goal, at shorter training time. Task participants were provided with a complete neural machine translation system, fixed training data, and the configuration of the network. Translation was performed in the English-to-Czech direction, and the task was divided into two subtasks with different configurations: one scaled to fit on a 4 GB GPU card and the other on an 8 GB card. We received 3 submissions for the 4 GB variant and 1 submission for the 8 GB variant; we also provided our own run for each size, along with two baselines. We translated the test set with the trained models and evaluated the outputs using several automatic metrics. We also report the results of the human evaluation of the submitted systems.

Cite

Bojar, O., Helcl, J., Kocmi, T., Libovický, J., & Musil, T. (2017). Results of the WMT17 neural MT training task. In WMT 2017 - 2nd Conference on Machine Translation, Proceedings (pp. 525–533). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-4757
