Learning to hash on structured data

Abstract

Hashing techniques have been widely applied to large-scale similarity search problems due to their computational and memory efficiency. However, most existing hashing methods assume that data examples are independently and identically distributed, whereas in many real-world applications there exists additional dependency/structure information between data examples. Ignoring this structure information may limit the performance of existing hashing algorithms. This paper explores the research problem of learning to Hash on Structured Data (HSD) and formulates a novel framework that incorporates this additional structure information. In particular, the hashing function is learned in a unified framework by simultaneously ensuring structural consistency and preserving the similarities between data examples. An iterative gradient descent algorithm is designed as the optimization procedure. Furthermore, we improve the effectiveness of the hashing function through an orthogonal transformation that minimizes the quantization error. Experimental results on two datasets clearly demonstrate the advantages of the proposed method over several state-of-the-art hashing methods.
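To illustrate the general idea of hashing for similarity search described above, the sketch below uses a simple random-projection hash (LSH-style). This is NOT the paper's HSD method; it only shows how binary codes let similar points be compared cheaply via Hamming distance. All function names and the toy data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_projection(dim, n_bits):
    # Random hyperplanes; a learned method (such as HSD) would
    # optimize this projection instead of drawing it at random.
    return rng.standard_normal((dim, n_bits))

def hash_codes(X, W):
    # Sign of a linear projection -> binary codes in {0, 1}.
    return (X @ W > 0).astype(np.uint8)

def hamming_distance(a, b):
    # Number of differing bits between two binary codes.
    return int(np.count_nonzero(a != b))

# Toy data: two nearby points and one distant point.
X = np.array([[ 1.0, 0.0,  0.2],
              [ 0.9, 0.1,  0.3],
              [-1.0, 2.0, -0.5]])

W = fit_projection(dim=3, n_bits=16)
codes = hash_codes(X, W)

d_near = hamming_distance(codes[0], codes[1])  # similar pair
d_far = hamming_distance(codes[0], codes[2])   # dissimilar pair
```

With high probability the similar pair ends up with a smaller Hamming distance than the dissimilar pair, which is the property learned hashing methods optimize directly rather than relying on random projections.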

Citation (APA):
Wang, Q., Si, L., & Shen, B. (2015). Learning to hash on structured data. In Proceedings of the National Conference on Artificial Intelligence (Vol. 4, pp. 3066–3072). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9557
