MSSPQ: Multiple Semantic Structure-Preserving Quantization for Cross-Modal Retrieval

Abstract

Cross-modal hashing is an active topic in the multimedia community: it generates compact hash codes from multimedia content for efficient cross-modal search. Two challenges cannot be ignored: (1) how to efficiently enhance cross-modal semantic mining, which is essential for cross-modal hash code learning, and (2) how to combine multiple kinds of semantic correlation learning to improve semantic similarity preservation. To this end, this paper proposes a novel end-to-end cross-modal hashing approach, named Multiple Semantic Structure-Preserving Quantization (MSSPQ), which integrates a deep hashing model with multiple semantic correlation learning to boost hash learning performance. The multiple semantic correlation learning consists of inter-modal and intra-modal pairwise correlation learning together with cosine correlation learning, which jointly capture cross-modal consistent semantics and preserve semantic similarity. Extensive experiments on three multimedia datasets confirm that the proposed method outperforms the baselines.
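The abstract names the loss components only at a high level. The following is a minimal PyTorch sketch of how such a combined objective might look; the function names, the specific pairwise likelihood formulation, and the weights alpha, beta, and gamma are assumptions for illustration, not the authors' exact MSSPQ objective (see the paper for the precise losses and quantization step).

```python
import torch
import torch.nn.functional as F

def pairwise_correlation_loss(h_a, h_b, sim):
    """Negative log-likelihood of pairwise (dis)similarity, a common
    formulation in deep cross-modal hashing. `sim` is a float {0,1}
    matrix where sim[i, j] = 1 iff items i and j share a label."""
    inner = 0.5 * h_a @ h_b.t()
    # Stable form of log(1 + exp(inner)) - sim * inner.
    return (F.softplus(inner) - sim * inner).mean()

def cosine_correlation_loss(h_img, h_txt, sim):
    """Push the pairwise cosine similarity of hash features toward the
    ground-truth semantic similarity."""
    cos = F.cosine_similarity(h_img.unsqueeze(1), h_txt.unsqueeze(0), dim=-1)
    return F.mse_loss(cos, sim)

def msspq_style_loss(h_img, h_txt, sim, alpha=1.0, beta=1.0, gamma=1.0):
    """Combine inter-modal, intra-modal, and cosine correlation terms.
    alpha/beta/gamma are hypothetical trade-off hyperparameters."""
    inter = pairwise_correlation_loss(h_img, h_txt, sim)
    intra = (pairwise_correlation_loss(h_img, h_img, sim)
             + pairwise_correlation_loss(h_txt, h_txt, sim))
    cosine = cosine_correlation_loss(h_img, h_txt, sim)
    return alpha * inter + beta * intra + gamma * cosine
```

Here `h_img` and `h_txt` would be the continuous hash features produced by the image and text branches of the deep model for a mini-batch; a quantization step (e.g., binarizing via sign) would follow to obtain the final codes.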

Citation (APA)

Zhu, L., Cai, L., Song, J., Zhu, X., Zhang, C., & Zhang, S. (2022). MSSPQ: Multiple Semantic Structure-Preserving Quantization for Cross-Modal Retrieval. In ICMR 2022 - Proceedings of the 2022 International Conference on Multimedia Retrieval (pp. 631–638). Association for Computing Machinery, Inc. https://doi.org/10.1145/3512527.3531417
