Supervised hashing methods have attracted much attention in recent years. However, most existing supervised hashing algorithms suffer from one or more of the following problems. First, most of them supervise the learning of hash codes with a pairwise similarity matrix whose size is quadratic in the number of training samples, so they do not scale to large datasets. Second, most of them relax the discrete constraints to ease optimization and then quantize the learned real-valued solution into binary hash codes; the quantization error introduced by this relaxation can degrade retrieval performance. To address these issues and make supervised hashing scalable to large datasets, we present a novel hashing method, named Scalable Supervised Discrete Hashing (SSDH). Specifically, based on a new loss function, SSDH bypasses direct optimization over the n × n pairwise similarity matrix. In addition, SSDH optimizes the binary codes directly, without relaxation, and thus avoids the large quantization error problem. Moreover, during learning it leverages both the pairwise similarity matrix and the label matrix, so that more semantic information is embedded into the learned hash codes. Extensive experiments are conducted on six benchmark datasets, including two large-scale datasets, i.e., NUS-WIDE and ImageNet. The results show that SSDH outperforms state-of-the-art baselines on these datasets, demonstrating both its effectiveness and its efficiency.
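To make the scalability point concrete, below is a minimal sketch, not the authors' exact formulation, of how a similarity-weighted term can be evaluated without ever materializing the n × n matrix, assuming the pairwise similarity S is induced by a one-hot label matrix Y via S = 2YY^T − 11^T. All variable names and sizes here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: when S = 2 * Y @ Y.T - 1 (one-hot labels), a term such as
# B.T @ S @ B can be computed through the low-rank factors, so the n x n
# similarity matrix never has to be built.

rng = np.random.default_rng(0)
n, c, r = 10_000, 21, 32                        # samples, classes, hash bits (illustrative)

Y = np.eye(c)[rng.integers(0, c, size=n)]       # one-hot label matrix, n x c
B = np.sign(rng.standard_normal((n, r)))        # binary codes in {-1, +1}, n x r

# Naive (quadratic memory/time), shown only for reference:
#   S = 2 * Y @ Y.T - 1
#   naive = B.T @ S @ B

# Scalable: B.T @ S @ B = 2 * (B.T @ Y) @ (Y.T @ B) - (B.T @ 1)(1.T @ B)
BtY = B.T @ Y                                   # r x c
Bt1 = B.sum(axis=0, keepdims=True)              # 1 x r  (equals 1.T @ B)
scalable = 2 * BtY @ BtY.T - Bt1.T @ Bt1        # r x r, cost O(n * r * c), linear in n

print(scalable.shape)                           # (32, 32)
```

The same factorization idea is what lets a label-induced similarity term be handled in time linear in n, which is the kind of saving the abstract refers to when it says the n × n matrix is bypassed.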
CITATION
Luo, X., Wu, Y., & Xu, X. S. (2018). Scalable supervised discrete hashing for large-scale search. In The Web Conference 2018 - Proceedings of the World Wide Web Conference, WWW 2018 (pp. 1603–1612). Association for Computing Machinery, Inc. https://doi.org/10.1145/3178876.3186072