Visual agreement regularized training for multi-modal machine translation


Abstract

Multi-modal machine translation aims to translate a source sentence into a different language in the presence of a paired image. Previous work suggests that the additional visual information provides only dispensable help to translation, needed in just a few special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g., "ball" in English and "ballon" in French). In addition, a simple yet effective multi-head co-attention model is introduced to capture interactions between visual and textual features. The results show that our approaches outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the proposed regularized training effectively improves the agreement of attention on the image, leading to better use of visual information.
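
To make the abstract's central idea concrete, below is a minimal PyTorch sketch of an agreement regularizer of the kind described: a penalty on the disagreement between the image-attention distributions of the two directional models at aligned visual words. All names here (agreement_loss, att_fwd, att_bwd, aligned_pairs, lambda_agree) are illustrative assumptions, and the choice of distance between attention distributions is a placeholder; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def agreement_loss(att_fwd, att_bwd, aligned_pairs):
    """Penalize disagreement between the image-attention distributions of
    the source-to-target and target-to-source models on aligned visual words.

    att_fwd: (T_tgt, R) attention over R image regions from the forward model
    att_bwd: (T_src, R) attention over R image regions from the backward model
    aligned_pairs: list of (i, j) index pairs marking semantically equivalent
        visual words (e.g. "ball" at target step i, "ballon" at source step j)
    """
    loss = 0.0
    for i, j in aligned_pairs:
        # Encourage both directions to attend to the same image regions,
        # here via a squared L2 distance between attention distributions.
        loss = loss + F.mse_loss(att_fwd[i], att_bwd[j], reduction="sum")
    return loss / max(len(aligned_pairs), 1)

# Joint training objective (sketch): both directional translation losses
# plus the agreement term, weighted by a hyperparameter lambda_agree:
#   total = ce_src2tgt + ce_tgt2src + lambda_agree * agreement_loss(...)
```

The multi-head co-attention component could likewise be sketched as a pair of cross-attention layers, where textual queries attend over image regions and vice versa. This is an assumption about the general shape of such a module, not the authors' architecture:

```python
import torch

class MultiHeadCoAttention(torch.nn.Module):
    """Sketch: bidirectional cross-attention between text and image features."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.txt2img = torch.nn.MultiheadAttention(d_model, n_heads)
        self.img2txt = torch.nn.MultiheadAttention(d_model, n_heads)

    def forward(self, text, image):
        # text: (T, B, d) token features; image: (R, B, d) region features
        t, _ = self.txt2img(text, image, image)   # tokens attend to regions
        v, _ = self.img2txt(image, text, text)    # regions attend to tokens
        return t, v
```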

Citation (APA)

Yang, P., Chen, B., Zhang, P., & Sun, X. (2020). Visual agreement regularized training for multi-modal machine translation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 9418–9425). AAAI Press. https://doi.org/10.1609/aaai.v34i05.6484
