CUNI system for the WMT17 multimodal translation task

Abstract

In this paper, we describe our submissions to the WMT17 Multimodal Translation Task. For Task 1 (multimodal translation), our best-scoring system is a purely textual neural translation of the source image caption into the target language. The main feature of the system is the use of additional training data, acquired by selecting similar sentences from parallel corpora and by data synthesis with back-translation. For Task 2 (cross-lingual image captioning), our best submitted system generates an English caption which is then translated by the best system from Task 1. We also present negative results, based on ideas that we believe have the potential to improve results but did not prove useful in our particular setup.
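The data synthesis with back-translation mentioned in the abstract can be sketched as follows: monolingual target-language sentences are translated back into the source language by a reverse-direction model, and each resulting pair (synthetic source, authentic target) is added to the training data. This is a minimal illustrative sketch, not the authors' implementation; the `backward_translate` function is a placeholder assumption standing in for a trained target-to-source NMT model.

```python
def backward_translate(target_sentence):
    # Placeholder assumption: a real system would run a trained
    # target->source neural MT model here. We use a toy word map
    # purely to make the sketch runnable.
    word_map = {"ein": "a", "Hund": "dog", "läuft": "runs"}
    return " ".join(word_map.get(w, w) for w in target_sentence.split())


def synthesize_parallel_data(target_monolingual):
    """Pair each authentic target sentence with its back-translated
    source, yielding synthetic (source, target) training examples."""
    return [(backward_translate(t), t) for t in target_monolingual]


# Usage: authentic German monolingual data becomes synthetic
# English-German parallel data for training the forward model.
mono = ["ein Hund läuft"]
print(synthesize_parallel_data(mono))
```

The key property is that the target side of each synthetic pair is authentic, fluent text, so the forward model learns to produce well-formed target-language output even when the synthetic source side is noisy.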

Cite

APA

Helcl, J., & Libovický, J. (2017). CUNI system for the WMT17 multimodal translation task. In WMT 2017 - 2nd Conference on Machine Translation, Proceedings (pp. 450–457). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w17-4749
