Equally-guided discriminative hashing for cross-modal retrieval

Abstract

Cross-modal hashing aims to project data from two modalities into a common Hamming space so that cross-modal retrieval can be performed efficiently. Despite the satisfactory performance achieved in real applications, existing methods cannot simultaneously preserve the semantic structure that maintains inter-class relationships and improve discriminability so that intra-class samples are aggregated, which limits retrieval performance. To address this problem, we propose Equally-Guided Discriminative Hashing (EGDH), which jointly considers semantic structure and discriminability. Specifically, we uncover the connection between semantic-structure-preserving and discriminative methods. Building on it, we directly encode multi-label annotations, which act as high-level semantic features, to build a common semantic-structure-preserving classifier. With this common classifier guiding the learning of the hash functions of the different modalities equally, the hash codes of samples are intra-class aggregated and preserve inter-class relationships. Experimental results on two benchmark datasets demonstrate the superiority of EGDH over the state of the art.
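To make the retrieval setting concrete, the sketch below illustrates the generic cross-modal hashing pipeline the abstract describes: each modality has its own hash function mapping features into a shared Hamming space, and retrieval ranks items of the other modality by Hamming distance. All names, dimensions, and the random projections are illustrative assumptions; the EGDH training procedure (the shared label-derived classifier guiding both hash functions) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (dimensions are hypothetical, not from the paper):
# image features d_img=8, text features d_txt=6, hash code length k=4.
d_img, d_txt, k, n = 8, 6, 4, 5
W_img = rng.standard_normal((d_img, k))  # image-modality hash projection
W_txt = rng.standard_normal((d_txt, k))  # text-modality hash projection

def hash_codes(X, W):
    """Project features and binarize with sign() into {-1, +1} codes."""
    return np.sign(X @ W)

def hamming(a, B):
    """Hamming distance between one code and a batch of codes."""
    return np.sum(a != B, axis=1)

images = rng.standard_normal((n, d_img))
texts = rng.standard_normal((n, d_txt))

img_codes = hash_codes(images, W_img)  # (n, k) codes in {-1, +1}
txt_codes = hash_codes(texts, W_txt)   # (n, k) codes in {-1, +1}

# Cross-modal retrieval: rank text items by Hamming distance
# to an image query in the common Hamming space.
query = img_codes[0]
ranking = np.argsort(hamming(query, txt_codes))
print(ranking)
```

In EGDH, the two projection functions would instead be learned so that both modalities' codes satisfy a single classifier built from the multi-label annotations, which is what aggregates intra-class samples while preserving inter-class structure.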

Citation (APA)

Shi, Y., You, X., Zheng, F., Wang, S., & Peng, Q. (2019). Equally-guided discriminative hashing for cross-modal retrieval. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 4767–4773). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/662
