Backpropagation-Based Decoding for Multimodal Machine Translation

Abstract

People are able to describe images using thousands of languages, but languages share only one visual world. The aim of this work is to use the intermediate visual representations learned by a deep convolutional neural network to transfer information across languages for which paired data are not available in any form. Our work proposes coupling backpropagation-based decoding with transformer-based multilingual-multimodal language models in order to obtain translations between any pair of languages used during training. We demonstrate the capabilities of this approach in particular on German-Japanese and Japanese-German translation, given training data of images freely associated with text in English, German, and Japanese, but where no single image carries annotations in both Japanese and German. Moreover, we show that our approach is also generally useful for multilingual image captioning when sentences in a second language are available at test time. Our method also compares favorably on the Multi30k dataset against recently proposed methods that likewise aim to leverage images as an intermediate source for translation.
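The core idea of backpropagation-based decoding is to keep the trained model's parameters frozen and instead run gradient descent over a continuous input representation until the model's output matches a target. The toy sketch below illustrates only this optimization pattern with a linear "model"; the names, dimensions, and loss are hypothetical and not the paper's architecture:

```python
import numpy as np

# Hypothetical illustration: the "model" W is frozen; gradients flow back
# into the continuous input x, which is what gets optimized (decoded).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))   # frozen model weights
y = rng.normal(size=4)        # target output representation

x = np.zeros(6)               # continuous input, decoded by optimization
lr = 0.02
losses = []
for _ in range(500):
    residual = W @ x - y              # forward pass through the frozen model
    losses.append(float(residual @ residual))
    grad = 2.0 * W.T @ residual       # gradient w.r.t. the input, not the weights
    x -= lr * grad                    # update the input only

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

In the paper's setting the frozen network is a multilingual-multimodal transformer and the optimized quantity corresponds to the target-language sentence representation, but the same "gradients into the input" mechanic applies.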

Citation (APA)
Yang, Z., Pinto-Alva, L., Dernoncourt, F., & Ordonez, V. (2022). Backpropagation-Based Decoding for Multimodal Machine Translation. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.736722
