Discrete sparse hashing for cross-modal similarity search


Abstract

Cross-modal hashing has achieved great success in cross-modal similarity search. However, most existing cross-modal hashing methods relax the discrete constraints when solving the hashing model and set the weights of the different modalities manually, both of which can significantly degrade retrieval performance. They are also sensitive to noise because of the widely used l2-norm loss function. To address these problems, this paper proposes a novel hashing method, Discrete Sparse Hashing (DSH), which efficiently learns unified binary codes. In the DSH model, unified hash codes are learned directly by discrete sparse coding in a low-dimensional latent space shared across modalities, so the large quantization error of relaxation is avoided and the learned codes are robust owing to their sparsity. Moreover, the weights of the different modalities are adjusted adaptively from the training data. Extensive experiments on three databases demonstrate the superior performance of DSH over most state-of-the-art methods.
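The abstract does not reproduce the paper's objective or update rules, so the following is only a minimal illustrative sketch in Python/NumPy of the general recipe it describes: alternate between fitting per-modality projections, updating the unified codes discretely (a sign step, avoiding the relax-then-round quantization error), and reweighting the modalities adaptively. Every name here (dsh_sketch, alpha, sparsity) and every specific update rule (least-squares projections, magnitude thresholding as a stand-in for the sparse-coding term, inverse-loss weighting) is an assumption for illustration, not the authors' algorithm.

```python
import numpy as np

def dsh_sketch(X1, X2, n_bits=32, n_iters=20, sparsity=0.5, seed=0):
    """Hypothetical sketch of learning unified codes for two modalities.

    X1: (n, d1) features for modality 1 (e.g., images)
    X2: (n, d2) features for modality 2 (e.g., text)
    Returns sparse ternary codes B in {-1, 0, +1}, the projections,
    and the learned modality weights. This is NOT the DSH model from
    the paper; it only mimics the high-level structure in the abstract.
    """
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    B = np.sign(rng.standard_normal((n, n_bits)))  # unified codes
    alpha = np.array([0.5, 0.5])                   # adaptive modality weights

    for _ in range(n_iters):
        # Projection step: least-squares fit of each modality onto the codes.
        W1, _, _, _ = np.linalg.lstsq(X1, B, rcond=None)
        W2, _, _, _ = np.linalg.lstsq(X2, B, rcond=None)

        # Discrete code update: take the sign of the weighted projections
        # directly, rather than relaxing to real values and rounding later.
        Z = alpha[0] * X1 @ W1 + alpha[1] * X2 @ W2
        B = np.sign(Z)
        B[B == 0] = 1

        # Sparsity (crude stand-in for the paper's discrete sparse coding):
        # zero out the weakest activations so each code keeps only its
        # most reliable bits, which is what makes sparse codes noise-robust.
        thresh = np.quantile(np.abs(Z), sparsity, axis=1, keepdims=True)
        B[np.abs(Z) < thresh] = 0

        # Adaptive weights (assumed inverse-loss rule): a modality that
        # reconstructs the codes better receives a larger weight.
        errs = np.array([np.linalg.norm(X1 @ W1 - B),
                         np.linalg.norm(X2 @ W2 - B)])
        alpha = 1.0 / (errs + 1e-12)
        alpha /= alpha.sum()

    return B, (W1, W2), alpha
```

With image and text feature matrices row-aligned by sample, a call such as B, (W1, W2), alpha = dsh_sketch(img_feats, txt_feats) yields one shared code matrix for both modalities; at query time, new samples from either modality would be projected by the corresponding W and compared to B, which is the usage pattern a unified-code method like DSH implies.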

Citation (APA)

Wang, L., Ma, C., Tu, E., Yang, J., & Kasabov, N. (2018). Discrete sparse hashing for cross-modal similarity search. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11304 LNCS, pp. 256–267). Springer Verlag. https://doi.org/10.1007/978-3-030-04212-7_22
