Multi-View Visual Semantic Embedding


Abstract

Visual Semantic Embedding (VSE) is a dominant approach for vision-language retrieval. Its goal is to learn an embedding space in which visual data lie close to their corresponding text descriptions. However, vision-language data exhibit large intra-class variations: multiple texts describing the same image may describe it from different views, and descriptions from different views are often dissimilar. Mainstream VSE methods embed samples of the same class at similar positions, which suppresses these intra-class variations and leads to inferior generalization performance. This paper proposes a Multi-View Visual Semantic Embedding (MV-VSE) framework, which learns multiple embeddings for each visual input and explicitly models intra-class variations. To optimize MV-VSE, a multi-view upper bound loss is proposed, and the multi-view embeddings are jointly optimized while intra-class variations are retained. MV-VSE is plug-and-play and can be applied to various VSE models and loss functions without excessively increasing model complexity. Experimental results on the Flickr30K and MS-COCO datasets demonstrate the superior performance of our framework.
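As a rough illustration of the multi-view idea (not the authors' implementation, which is not given here), the retrieval score between an image and a caption can be taken as the best cosine match over the image's K view embeddings, so a caption only needs to match one view and intra-class variation is not suppressed. All function names, shapes, and the max-pooling choice below are assumptions for this sketch:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize vectors so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def multi_view_score(image_views, caption_emb):
    """Score one image against one caption (hypothetical sketch).

    image_views: (K, D) array of K view embeddings for a single image
                 (MV-VSE learns several embeddings per visual input).
    caption_emb: (D,) caption embedding.
    Returns the best cosine similarity over the K views, so the caption
    is matched to its closest view rather than an averaged embedding.
    """
    views = l2_normalize(image_views)
    cap = l2_normalize(caption_emb)
    return float(np.max(views @ cap))

# Toy example: two views of one image; the caption matches the second view.
views = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
caption = np.array([0.1, 0.9, 0.0])
score = multi_view_score(views, caption)
```

In this toy case the max over views picks out the second view, whereas a single averaged embedding would sit between the two views and score the caption lower.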

Cite

APA

Li, Z., Guo, C., Feng, Z., Hwang, J. N., & Xue, X. (2022). Multi-View Visual Semantic Embedding. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1130–1136). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/158
