Abstract
Image-text matching is central to visual-semantic cross-modal retrieval and has recently attracted extensive attention. Previous studies have been devoted to finding the latent correspondence between image regions and words, e.g., connecting keywords to specific regions of salient objects. However, existing methods are usually committed to handling concrete objects rather than abstract ones, e.g., descriptions of actions, which are in fact also ubiquitous in real-world description texts. The main challenge in dealing with abstract objects is that, unlike their concrete counterparts, there are no explicit connections between them and image regions. One therefore has to find the implicit and intrinsic connections instead. In this paper, we propose a relation-wise dual attention network (RDAN) for image-text matching. Specifically, we maintain an over-complete set containing pairs of regions and words. Built upon this set, we encode the local correlations and the global dependencies between regions and words by training a visual-semantic network. A dual-pathway attention network is then presented to infer the visual-semantic alignments and image-text similarity. Extensive experiments validate the efficacy of our method, which achieves state-of-the-art performance on several public benchmark datasets.
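To make the general idea concrete, the following is a minimal sketch of how region-word attention can be turned into a single image-text similarity score. This is a generic cross-attention pooling scheme, not the paper's exact RDAN formulation; the tensor shapes, the softmax temperature, and the mean-pooling choice are all assumptions for illustration.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize feature vectors to unit length (with a small epsilon)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def image_text_similarity(regions, words, temperature=9.0):
    """Attend each word to all image regions, then pool the per-word
    alignment scores into one image-text similarity.

    regions: (n_regions, d) region features (e.g., from an object detector)
    words:   (n_words, d) word features (e.g., from a text encoder)
    """
    r = l2norm(regions)                    # (n_regions, d)
    w = l2norm(words)                      # (n_words, d)
    sim = w @ r.T                          # (n_words, n_regions) cosine sims
    attn = np.exp(temperature * sim)       # softmax over regions per word
    attn /= attn.sum(axis=1, keepdims=True)
    attended = attn @ r                    # (n_words, d) region context per word
    scores = np.sum(l2norm(attended) * w, axis=1)  # cosine per word
    return float(scores.mean())            # pooled image-text score

# Usage with random stand-in features (36 regions, 12 words, 1024-dim):
rng = np.random.default_rng(0)
score = image_text_similarity(rng.normal(size=(36, 1024)),
                              rng.normal(size=(12, 1024)))
print(score)
```

In retrieval, such a score would be computed for every candidate image-caption pair and used to rank candidates; since all vectors are unit-normalized, the pooled score always lies in [-1, 1].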
Citation
Hu, Z., Luo, Y., Lin, J., Yan, Y. Y., & Chen, J. (2019). Multi-level visual-semantic alignments with relation-wise dual attention network for image and text matching. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 789–795). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/111