Cross-modal hashing has been studied extensively over the past decades for its significant advantages in computation and storage cost. For heterogeneous data, cross-modal hashing aims to learn a shared Hamming space in which a query from one modality can retrieve relevant items from another modality. Although cross-modal hashing methods have achieved significant progress, some limitations remain to be addressed. First, to leverage semantic information in the hash codes, most methods learn hash codes from a similarity matrix constructed directly from class labels, ignoring the fact that real-world class labels may contain noise. Second, most methods relax the discrete constraint on hash codes, which may cause large quantization errors and inevitably results in suboptimal performance. To address these issues, we propose a discrete robust supervised hashing (DRSH) algorithm in this paper. Specifically, the class labels and features from different modalities are first fused to learn a robust similarity matrix through a low-rank constraint, which reveals the underlying structure of the matrix and captures the noise in it. Then, hash codes are generated by preserving the similarities encoded in the robust similarity matrix in the shared Hamming space. The optimization is challenging due to the discrete constraint on the hash codes, and a discrete optimization algorithm is proposed to address this issue. We evaluate DRSH on three real-world datasets, and the results demonstrate its superiority over several existing hashing methods.
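The abstract does not give the DRSH formulation itself, so the following is only a rough conceptual sketch of the two generic ingredients it mentions: splitting a label-derived similarity matrix into a low-rank "clean" part plus a sparse "noise" part (a robust-PCA-style decomposition), and then producing binary codes that approximately preserve the cleaned similarities. All function names, parameters, and the optimization steps below are illustrative assumptions; in particular, the binarization shown here is a simple relaxed step (sign of top eigenvectors), not the discrete solver the paper proposes.

```python
import numpy as np

# Hypothetical sketch, not the authors' DRSH algorithm.
# (1) low_rank_sparse_split: robust-PCA-style split S ~= L (low-rank) + E (sparse noise).
# (2) binary_codes: relaxed stand-in that signs the top eigenvectors of the cleaned matrix.

def low_rank_sparse_split(S, lam=None, rho=1.5, n_iter=200, tol=1e-7):
    """Inexact-ALM principal component pursuit on a similarity matrix S."""
    n = S.shape[0]
    if lam is None:
        lam = 1.0 / np.sqrt(n)                      # standard RPCA default
    norm_fro = np.linalg.norm(S, "fro")
    spec = np.linalg.norm(S, 2)                     # largest singular value
    mu = 1.25 / spec
    Y = S / max(spec, np.max(np.abs(S)) / lam)      # dual variable initialization
    L = np.zeros_like(S)
    E = np.zeros_like(S)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(S - E + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Sparse update: elementwise soft thresholding captures the "noise" part.
        T = S - L + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual ascent on the residual.
        R = S - L - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, "fro") / norm_fro < tol:
            break
    return L, E

def binary_codes(L_clean, n_bits=16):
    """Sign of the top eigenvectors of the cleaned similarity matrix
    (a relaxation; DRSH instead keeps the discrete constraint)."""
    vals, vecs = np.linalg.eigh(L_clean)
    top = vecs[:, np.argsort(vals)[::-1][:n_bits]]
    return np.where(top >= 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=(60, 5)).astype(float)     # toy multi-label matrix
    S = labels @ labels.T                                        # label-based similarity
    noise = (rng.random(S.shape) < 0.05) * rng.normal(0, 3, S.shape)
    S_noisy = (S + noise + (S + noise).T) / 2                    # symmetric, sparsely corrupted
    L_clean, E_noise = low_rank_sparse_split(S_noisy)
    B = binary_codes(L_clean, n_bits=16)
    print(B.shape)                                               # (60, 16) codes in {-1, +1}
```

In this toy usage, the sparse component E_noise absorbs the scattered label corruptions while L_clean retains the block structure induced by shared labels, which is the intuition behind learning a "robust" similarity matrix before fitting hash codes to it.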
CITATION STYLE
Yao, T., Zhang, Z., Yan, L., Yue, J., & Tian, Q. (2019). Discrete Robust Supervised Hashing for Cross-Modal Retrieval. IEEE Access, 7, 39806–39814. https://doi.org/10.1109/ACCESS.2019.2897249