Contrastive Adversarial Training for Multi-Modal Machine Translation

Abstract

The multi-modal machine translation task aims to improve translation quality with the help of additional visual input, which is expected to disambiguate or complement the semantics when sentences contain ambiguous words or incomplete expressions. Existing methods have explored many ways to fuse visual information into text representations. However, only a minority of sentences need extra visual information as a complement; without guidance, models tend to learn text-only translation from the majority of well-aligned translation pairs. In this article, we propose a contrastive adversarial training approach to enhance visual participation in semantic representation learning. By contrasting the multi-modal input with adversarial samples, the model learns to identify the most informative sample, namely the one coupled with a congruent image and several visual objects extracted from it. This approach prevents the visual information from being ignored and further fuses cross-modal information. We evaluate our method on three multi-modal language pairs. Experimental results show that our model improves translation accuracy, and further analysis shows that it is more sensitive to visual information.
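To illustrate the kind of objective the abstract describes, the sketch below shows an InfoNCE-style contrastive loss in which the text representation must score its congruent visual context above adversarial (mismatched) candidates. This is a minimal illustration only: the function name, tensor shapes, temperature value, and the convention that the congruent sample sits at index 0 are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_visual_loss(text_repr, visual_candidates, temperature=0.1):
    """Hypothetical contrastive objective: the text encoding should match the
    congruent visual context (index 0) better than the adversarial samples.

    text_repr:         (batch, dim)    sentence-level text encoding
    visual_candidates: (batch, k, dim) index 0 = congruent image/object features,
                                       indices 1..k-1 = adversarial samples
    """
    text = F.normalize(text_repr, dim=-1).unsqueeze(1)            # (batch, 1, dim)
    visual = F.normalize(visual_candidates, dim=-1)               # (batch, k, dim)
    logits = torch.bmm(text, visual.transpose(1, 2)).squeeze(1)   # (batch, k) cosine similarities
    logits = logits / temperature
    targets = torch.zeros(logits.size(0), dtype=torch.long)       # congruent sample assumed at index 0
    return F.cross_entropy(logits, targets)

# Toy usage with random features (4 sentences, 8 visual candidates each)
loss = contrastive_visual_loss(torch.randn(4, 512), torch.randn(4, 8, 512))
print(loss.item())
```

A loss of this form penalizes the model whenever mismatched visual inputs score as well as the congruent one, which is one way to keep visual features from being ignored during training.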

Cite

APA

Huang, X., Zhang, J., & Zong, C. (2023). Contrastive Adversarial Training for Multi-Modal Machine Translation. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(6). https://doi.org/10.1145/3587267
