Image Retrieval Using a Deep Attention-Based Hash

Abstract

Image retrieval is becoming increasingly important due to the rapid growth in the number of images on the web. To make computing the similarity of images more efficient, hashing has moved into the focus of research. This paper proposes a Deep Attention-based Hash (DAH) retrieval model, which combines an attention module with a convolutional neural network to obtain hash codes with strong representational power. DAH has the following features: the Hamming distance between hash codes of similar images is small, while the Hamming distance between hash codes of dissimilar images is kept above a larger constant value, and the quantization loss incurred when mapping from Euclidean distance to Hamming distance is minimized. DAH achieves high image retrieval precision: we thoroughly compare it with ten state-of-the-art approaches on the CIFAR-10 dataset. The results show that the Mean Average Precision (MAP) of DAH exceeds 92% for 12, 24, 36 and 48 bit hash codes on CIFAR-10, which is better than the state-of-the-art methods used for comparison.
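To make the pairwise objective described in the abstract concrete, the sketch below shows one common way such a loss can be written in PyTorch: similar pairs are pulled toward a small Hamming distance, dissimilar pairs are pushed beyond a margin, and a quantization term keeps the relaxed real-valued codes close to binary values. The function name, margin, and weighting are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of a pairwise hashing loss with a quantization term.
# This is an assumed illustration of the general technique, not DAH itself.
import torch
import torch.nn.functional as F

def pairwise_hash_loss(h1, h2, similar, margin=12.0, quant_weight=0.1):
    """h1, h2: (batch, n_bits) relaxed codes in [-1, 1] (e.g. after tanh).
    similar: (batch,) float tensor, 1.0 for similar pairs, 0.0 for dissimilar."""
    n_bits = h1.size(1)
    # Surrogate Hamming distance between relaxed codes.
    dist = 0.5 * (n_bits - (h1 * h2).sum(dim=1))
    # Pull similar pairs together; push dissimilar pairs beyond the margin.
    pair_loss = similar * dist + (1.0 - similar) * F.relu(margin - dist)
    # Quantization loss: penalize deviation of relaxed codes from {-1, +1}.
    quant_loss = ((h1.abs() - 1.0) ** 2).mean() + ((h2.abs() - 1.0) ** 2).mean()
    return pair_loss.mean() + quant_weight * quant_loss
```

At retrieval time, binary codes would be obtained by thresholding the relaxed outputs, e.g. `torch.sign(h)`, so that similarity search reduces to fast Hamming-distance comparisons.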

Citation (APA)

Li, X., Xu, M., Xu, J., Weise, T., Zou, L., Sun, F., & Wu, Z. (2020). Image Retrieval Using a Deep Attention-Based Hash. IEEE Access, 8, 142229–142242. https://doi.org/10.1109/ACCESS.2020.3011102
