Neurons merging layer: Towards progressive redundancy reduction for deep supervised hashing

Abstract

Deep supervised hashing has become an active topic in information retrieval. It generates hashing bits from the output neurons of a deep hashing network. During binary discretization, there often exists considerable redundancy among the hashing bits, which degrades retrieval performance in terms of both storage and accuracy. This paper proposes a simple yet effective Neurons Merging Layer (NMLayer) for deep supervised hashing. A graph is constructed to represent the redundancy relationships between hashing bits and is used to guide the learning of the hashing network; specifically, the graph is learned dynamically by a novel mechanism defined over our active and frozen phases. According to the learned relationships, the NMLayer merges redundant neurons together to balance the importance of each output neuron. Moreover, multiple NMLayers are trained progressively so that a deep hashing network learns a more compact hashing code from a long redundant one. Extensive experiments on four datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods.
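
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation. It builds a simple redundancy "graph" from the pairwise correlation of binarized output activations over a batch and merges the single most correlated pair of neurons, shortening an L-bit code to L-1 bits. The class and method names (NMLayerSketch, build_from_batch) are illustrative assumptions; the paper's actual NMLayer learns the graph through its active and frozen training phases rather than from one batch statistic.

```python
import torch
import torch.nn as nn


class NMLayerSketch(nn.Module):
    """Hypothetical neurons-merging layer (illustrative only).

    Reduces an in_bits-length code to in_bits - 1 by averaging the pair
    of output neurons whose binarized activations are most correlated,
    a crude stand-in for the paper's learned redundancy graph.
    """

    def __init__(self, in_bits: int):
        super().__init__()
        self.in_bits = in_bits
        # Fixed merging matrix of shape (in_bits - 1, in_bits),
        # filled in lazily from the first batch seen.
        self.register_buffer("merge", torch.zeros(in_bits - 1, in_bits))
        self.initialized = False

    @torch.no_grad()
    def build_from_batch(self, codes: torch.Tensor) -> None:
        # codes: (batch, in_bits) real-valued pre-binarization activations.
        bits = torch.sign(codes)                    # binarize to {-1, 0, +1}
        corr = (bits.t() @ bits) / bits.size(0)     # pairwise bit correlation
        corr.fill_diagonal_(-2.0)                   # exclude self-pairs
        flat = int(corr.argmax())                   # most redundant pair (i, j)
        i, j = flat // self.in_bits, flat % self.in_bits
        keep = [k for k in range(self.in_bits) if k != j]
        merge = torch.zeros(self.in_bits - 1, self.in_bits,
                            device=codes.device)
        for row, k in enumerate(keep):              # pass kept bits through
            merge[row, k] = 1.0
        merge[keep.index(i), j] = 1.0               # fold bit j into bit i...
        merge[keep.index(i)] /= 2.0                 # ...by averaging the pair
        self.merge.copy_(merge)
        self.initialized = True

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        if not self.initialized:
            self.build_from_batch(codes)
        return codes @ self.merge.t()               # (batch, in_bits - 1)


if __name__ == "__main__":
    net_out = torch.randn(128, 48)   # e.g., 48 raw hashing activations
    layer = NMLayerSketch(48)
    shorter = layer(net_out)
    print(shorter.shape)             # torch.Size([128, 47])
```

Stacking several such layers and training them one after another would shorten a long redundant code step by step (e.g., 48 bits down to 32), mirroring the progressive reduction the abstract describes.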

Citation (APA)

Fu, C., Song, L., Wu, X., Wang, G., & He, R. (2019). Neurons merging layer: Towards progressive redundancy reduction for deep supervised hashing. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) (pp. 2322–2328). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/322
