Semantics Consistent Adversarial Cross-Modal Retrieval

Abstract

Cross-modal retrieval returns relevant results from one modality given a query from another. Its main challenge is the “heterogeneity gap” amongst modalities: different modalities have different distributions and representations, so their similarity cannot be measured directly. In this paper, we propose a semantics-consistent adversarial cross-modal retrieval approach, which learns a semantics-consistent representation for different modalities that share the same semantic category. Specifically, we encourage the class centers of different modalities with the same semantic label to be as close as possible, and also minimize the distances between samples from one modality and the same-label class centers of the other modalities. Comprehensive experiments on the Wikipedia dataset show the efficiency and effectiveness of our approach for cross-modal retrieval.
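The two objectives the abstract describes can be sketched as a simple loss. This is a hypothetical illustration, not the paper's actual formulation: the function name, feature arrays, and the use of squared Euclidean distances and mean class centers are all assumptions for the sketch.

```python
import numpy as np

def semantic_consistency_loss(img_feats, txt_feats, labels):
    """Hypothetical sketch of a semantics-consistent loss:
    (1) pull same-label class centers of the two modalities together;
    (2) pull each sample toward the same-label center of the other modality.
    img_feats, txt_feats: (n, d) arrays of paired features; labels: (n,) ints.
    """
    center_loss = 0.0
    sample_loss = 0.0
    for c in np.unique(labels):
        mask = labels == c
        c_img = img_feats[mask].mean(axis=0)  # image-modality class center
        c_txt = txt_feats[mask].mean(axis=0)  # text-modality class center
        # (1) squared distance between the same-label centers of the two modalities
        center_loss += np.sum((c_img - c_txt) ** 2)
        # (2) squared distances from samples to the other modality's class center
        sample_loss += np.sum((img_feats[mask] - c_txt) ** 2)
        sample_loss += np.sum((txt_feats[mask] - c_img) ** 2)
    return center_loss + sample_loss
```

The loss vanishes only when, for every semantic label, both modalities collapse onto a shared class center, which is the representation the abstract argues closes the heterogeneity gap.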

Citation (APA)

Xuan, R., Ou, W., Zhou, Q., Cao, Y., Yang, H., Xiong, X., & Ruan, F. (2020). Semantics Consistent Adversarial Cross-Modal Retrieval. In Studies in Computational Intelligence (Vol. 810, pp. 463–472). Springer Verlag. https://doi.org/10.1007/978-3-030-04946-1_45
