Attention-based Multimodal Neural Machine Translation

144 citations · 167 Mendeley readers

Abstract

We present a novel neural machine translation (NMT) architecture associating visual and textual features for translation tasks with multiple modalities. Transformed global and regional visual features are concatenated with text to form attendable sequences, which are distributed over parallel long short-term memory (LSTM) threads to assist the encoder in generating a representation for attention-based decoding. Experiments show that the proposed NMT outperforms the text-only baseline.
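The abstract describes the architecture only at a high level. As a rough illustration, the sketch below shows one way the described pipeline could look in plain PyTorch: global and regional CNN image features are projected into the word-embedding space, prepended to the token embeddings to form a single attendable sequence, and encoded with an LSTM whose outputs would feed an attention-based decoder. All class names and dimensions are assumptions for illustration, and the paper's parallel LSTM threads are simplified here to a single LSTM; this is not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of the multimodal
# encoding step described in the abstract.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, visual_dim=4096, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Transform raw visual features (e.g., CNN global/region vectors)
        # into the same space as the word embeddings.
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        # Single LSTM stands in for the paper's parallel LSTM threads.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, tokens, global_feat, regional_feats):
        # tokens: (batch, seq_len) token ids
        # global_feat: (batch, visual_dim) whole-image feature
        # regional_feats: (batch, n_regions, visual_dim) region features
        text = self.embed(tokens)                              # (B, T, E)
        vis = torch.cat([global_feat.unsqueeze(1), regional_feats], dim=1)
        vis = self.visual_proj(vis)                            # (B, 1+R, E)
        # Concatenate visual "tokens" with text into one attendable sequence.
        seq = torch.cat([vis, text], dim=1)                    # (B, 1+R+T, E)
        outputs, state = self.lstm(seq)
        # `outputs` is what an attention-based decoder would attend over.
        return outputs, state
```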

Citation (APA)

Huang, P. Y., Liu, F., Shiang, S. R., Oh, J., & Dyer, C. (2016). Attention-based Multimodal Neural Machine Translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers (pp. 639–645). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-2360
