Visual relations augmented cross-modal retrieval

Abstract

Retrieving relevant samples across multiple modalities is a core topic that has received consistent research interest in the multimedia community and has benefited various real-world multimedia applications (e.g., text-based image search). Current models mainly focus on learning a unified visual-semantic embedding space to bridge visual content and text queries, aiming to align relevant samples from different modalities as neighbors in the embedding space. However, these models do not consider the relations between visual components when learning visual representations, leaving them unable to distinguish images that share the same visual components but differ in their relations (see Figure 1). To model visual content precisely, we introduce a novel framework that enhances visual representations with the relations between components. Specifically, visual relations are represented by a scene graph extracted from an image and then encoded by a graph convolutional network to learn visual relational features. We combine the relational and compositional representations for image-text retrieval. Empirical results on the challenging MS-COCO and Flickr30K datasets demonstrate the effectiveness of the proposed model for the cross-modal retrieval task.
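The sketch below is a minimal, illustrative reading of the pipeline the abstract describes, not the authors' implementation: scene-graph node features are encoded with a small graph convolutional network, pooled into a relational feature, fused with a global (compositional) image feature, and aligned with a text embedding using a standard hinge-based triplet ranking loss common in visual-semantic embedding work. All module names, feature dimensions, and the in-batch loss are assumptions for illustration.

# Minimal sketch (assumptions, not the authors' code): GCN over scene-graph
# nodes, fused with a global image feature, aligned with text embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # Symmetrically normalize the adjacency matrix (with self-loops).
        adj = adj + torch.eye(adj.size(-1), device=adj.device)
        d_inv_sqrt = adj.sum(-1).clamp(min=1e-6).pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(-2)
        return F.relu(self.linear(adj_norm @ node_feats))

class RelationAugmentedImageEncoder(nn.Module):
    """Fuses a relational (scene-graph/GCN) feature with a global CNN feature."""
    def __init__(self, node_dim=2048, global_dim=2048, embed_dim=1024):
        super().__init__()
        self.gcn1 = GCNLayer(node_dim, embed_dim)
        self.gcn2 = GCNLayer(embed_dim, embed_dim)
        self.global_proj = nn.Linear(global_dim, embed_dim)
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, node_feats, adj, global_feat):
        h = self.gcn2(self.gcn1(node_feats, adj), adj)    # (B, N, D)
        relational = h.mean(dim=1)                        # pool over graph nodes
        compositional = self.global_proj(global_feat)     # (B, D)
        fused = self.fuse(torch.cat([relational, compositional], dim=-1))
        return F.normalize(fused, dim=-1)

def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based ranking loss over in-batch negatives (typical VSE objective)."""
    scores = img_emb @ txt_emb.t()                        # cosine similarities
    pos = scores.diag().unsqueeze(1)
    cost_t = (margin + scores - pos).clamp(min=0)         # image -> wrong caption
    cost_i = (margin + scores - pos.t()).clamp(min=0)     # caption -> wrong image
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_t.masked_fill(mask, 0).sum() + cost_i.masked_fill(mask, 0).sum()

# Toy usage: 4 images, each with an 8-node scene graph (random stand-in data).
encoder = RelationAugmentedImageEncoder()
nodes = torch.randn(4, 8, 2048)                  # object/relation node features
adj = torch.randint(0, 2, (4, 8, 8)).float()     # scene-graph adjacency
glob = torch.randn(4, 2048)                      # global CNN image feature
txt = F.normalize(torch.randn(4, 1024), dim=-1)  # stand-in text embeddings
loss = triplet_ranking_loss(encoder(nodes, adj, glob), txt)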

Citation (APA)
Guo, Y., Chen, J., Zhang, H., & Jiang, Y. G. (2020). Visual relations augmented cross-modal retrieval. In ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 9–15). Association for Computing Machinery, Inc. https://doi.org/10.1145/3372278.3390709
