Step-Wise Hierarchical Alignment Network for Image-Text Matching


Abstract

Image-text matching plays a central role in bridging the semantic gap between vision and language. The key to precise visual-semantic alignment lies in capturing the fine-grained cross-modal correspondence between image and text. Most previous methods rely on single-step reasoning to discover visual-semantic interactions, which lacks the ability to exploit multi-level information for locating hierarchical fine-grained relevance. In contrast, in this work we propose a step-wise hierarchical alignment network (SHAN) that decomposes image-text matching into a multi-step cross-modal reasoning process. Specifically, we first achieve local-to-local alignment at the fragment level, followed by global-to-local and global-to-global alignment at the context level, performed sequentially. This progressive alignment strategy supplies our model with more complementary and sufficient semantic clues for understanding the hierarchical correlations between image and text. Experimental results on two benchmark datasets demonstrate the superiority of our proposed method.
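The three alignment stages described above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: the attention temperature, mean pooling for global vectors, and the equal-weight combination of the three scores are all hypothetical choices made here for clarity.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length so dot products become cosines."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def local_to_local(V, T):
    """Fragment level: each word attends over image regions (hypothetical
    softmax cross-attention; temperature 9 is an assumption)."""
    V, T = l2norm(V), l2norm(T)
    sim = T @ V.T                           # (n_words, n_regions) cosines
    attn = np.exp(9.0 * sim)
    attn /= attn.sum(axis=1, keepdims=True)
    attended = attn @ V                     # word-grounded visual context
    # average cosine between each word and its attended visual context
    return float(np.mean(np.sum(l2norm(attended) * T, axis=1)))

def global_to_local(V, t_global):
    """Context level: global sentence vector scored against each region;
    the best-matching region determines the score."""
    return float(np.max(l2norm(V) @ l2norm(t_global)))

def global_to_global(v_global, t_global):
    """Context level: holistic image representation vs. holistic sentence."""
    return float(l2norm(v_global) @ l2norm(t_global))

def shan_score(V, T):
    """Step-wise combination of the three alignment scores
    (equal weighting and mean pooling are assumptions)."""
    v_g, t_g = V.mean(axis=0), T.mean(axis=0)
    return (local_to_local(V, T)
            + global_to_local(V, t_g)
            + global_to_global(v_g, t_g)) / 3.0
```

Each stage refines the previous one: fragment-level matching grounds individual words in regions, while the two context-level stages check that the overall sentence meaning is consistent with both the best local evidence and the global image content.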

Citation (APA)
Ji, Z., Chen, K., & Wang, H. (2021). Step-Wise Hierarchical Alignment Network for Image-Text Matching. In IJCAI International Joint Conference on Artificial Intelligence (pp. 765–771). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/106
