Cross-model retrieval with reconstruct hashing


Abstract

Hashing has been widely used in large-scale vision problems thanks to its efficiency in both storage and speed. For fast cross-modal retrieval tasks, cross-modal hashing (CMH) has recently received increasing attention for its ability to improve the quality of hash coding by exploiting the semantic correlation across different modalities. Most traditional CMH methods focus on designing a good hash function that uses supervised information appropriately, but their performance is limited by hand-crafted features. Some deep-learning-based CMH methods focus instead on learning good features with deep networks; however, directly quantizing those features may incur a large loss for hashing. In this paper, we propose a novel end-to-end deep cross-modal hashing framework that integrates feature learning and hash-code learning into the same network while preserving the relationship between features across modalities. For the hashing process, we design a novel network structure and loss for hash learning, and we reconstruct features from the hash codes to improve code quality. Experiments on standard cross-modal retrieval databases show that the proposed method yields substantial boosts over state-of-the-art hashing methods.
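The core ideas in the abstract (binarizing learned features into hash codes, retrieving across modalities by Hamming distance, and reconstructing features from codes to control quantization loss) can be illustrated with a minimal NumPy sketch. All names, dimensions, and the linear projections/decoder below are illustrative assumptions, not the paper's actual architecture, which is a trained deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for two modalities (assumed dimensions: 8-D image, 6-D text).
img_feats = rng.standard_normal((5, 8))
txt_feats = rng.standard_normal((5, 6))

K = 4  # hash-code length (assumption)

# Stand-in hash functions: linear projection + sign(), giving +/-1 codes.
# In the paper these projections are deep networks learned end-to-end.
W_img = rng.standard_normal((8, K))
W_txt = rng.standard_normal((6, K))
B_img = np.sign(img_feats @ W_img)
B_txt = np.sign(txt_feats @ W_txt)

def hamming(a, b):
    """Hamming distance between +/-1 code matrices: (K - <a, b>) / 2."""
    return (a.shape[-1] - a @ b.T) / 2

# Cross-modal retrieval: distance from every image code to every text code.
D = hamming(B_img, B_txt)

# Reconstruction step (sketch): decode hash codes back to features with a
# linear decoder and measure the error; training would minimize this loss
# so the codes retain feature information.
R_img = np.linalg.lstsq(B_img, img_feats, rcond=None)[0]
recon_loss = np.mean((B_img @ R_img - img_feats) ** 2)
```

A lower reconstruction loss indicates the binary codes preserve more of the feature information lost by direct quantization, which is the motivation for the reconstruction term.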

Citation (APA)

Liu, Y., Yan, C., Bai, X., & Zhou, J. (2018). Cross-model retrieval with reconstruct hashing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11004 LNCS, pp. 386–394). Springer Verlag. https://doi.org/10.1007/978-3-319-97785-0_37
