Cross-modal hashing is an active topic in the multimedia community: it generates compact hash codes from multimodal content to enable efficient cross-modal search. Two challenges cannot be ignored: (1) how to efficiently enhance cross-modal semantic mining, which is essential for cross-modal hash code learning, and (2) how to combine multiple forms of semantic correlation learning to improve semantic similarity preservation. To this end, this paper proposes a novel end-to-end cross-modal hashing approach, named Multiple Semantic Structure-Preserving Quantization (MSSPQ), which integrates a deep hashing model with multiple semantic correlation learning to boost hash learning performance. The multiple semantic correlation learning consists of inter-modal and intra-modal pairwise correlation learning together with cosine correlation learning, which jointly capture cross-modal consistent semantics and preserve semantic similarity. Extensive experiments on three multimedia datasets confirm that the proposed method outperforms the baselines.
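For intuition, below is a minimal PyTorch-style sketch of how such a composite objective could be assembled from the three correlation terms the abstract names. The specific loss forms (a DCMH-style negative log-likelihood pairwise term and a cosine-regression term) and the weights `alpha`, `beta`, `gamma` are illustrative assumptions, not the paper's actual formulation; the quantization step that turns continuous features into binary codes is also omitted here.

```python
import torch
import torch.nn.functional as F

def pairwise_correlation_loss(feat_a, feat_b, sim):
    # Negative log-likelihood pairwise loss (common in deep cross-modal
    # hashing, e.g. DCMH-style; assumed here, not taken from the paper).
    # theta_ij = 0.5 * <f_i, g_j>; sim is the {0,1} semantic similarity
    # matrix (1 if the pair shares at least one label).
    theta = 0.5 * (feat_a @ feat_b.t())
    # log(1 + exp(theta)) - sim * theta, computed stably via softplus
    return torch.mean(F.softplus(theta) - sim * theta)

def cosine_correlation_loss(feat_a, feat_b, sim):
    # Pushes the cosine similarity of cross-modal pairs toward the
    # ground-truth semantic similarity (1 for similar pairs, 0 otherwise).
    cos = F.cosine_similarity(feat_a.unsqueeze(1), feat_b.unsqueeze(0), dim=-1)
    return F.mse_loss(cos, sim)

def multi_semantic_loss(img, txt, sim, alpha=1.0, beta=1.0, gamma=1.0):
    # img, txt: (N, d) continuous hash features from the two branches;
    # alpha, beta, gamma: hypothetical trade-off weights.
    inter = pairwise_correlation_loss(img, txt, sim)          # inter-modal
    intra = (pairwise_correlation_loss(img, img, sim) +       # intra-modal
             pairwise_correlation_loss(txt, txt, sim))
    cosine = cosine_correlation_loss(img, txt, sim)           # cosine term
    return alpha * inter + beta * intra + gamma * cosine
```

In a training loop, `sim` would be derived from the label overlap of the sampled image-text batch, and the combined loss would be backpropagated through both modality encoders.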
Citation:
Zhu, L., Cai, L., Song, J., Zhu, X., Zhang, C., & Zhang, S. (2022). MSSPQ: Multiple Semantic Structure-Preserving Quantization for Cross-Modal Retrieval. In ICMR 2022 - Proceedings of the 2022 International Conference on Multimedia Retrieval (pp. 631–638). Association for Computing Machinery, Inc. https://doi.org/10.1145/3512527.3531417