An efficient parallel strategy for matching visual self-similarities in large image databases

Abstract

Due to the popularity of online social systems, there is a huge and still increasing amount of image data on the web. Handling this massive amount of visual information often requires algorithms to be redesigned. In this work, we present an efficient approach for finding visual similarities between images that runs entirely on the GPU and scales to large image databases. Based on local self-similarity descriptors, the approach finds similarities even across modalities. Given a set of images, a database is created by storing all descriptors in an arrangement suited for parallel GPU-based comparison. A novel voting scheme further takes the spatial layout of the descriptors into account with hardly any overhead. Thousands of images can be searched in only a few seconds. We apply our algorithm to cluster a set of image responses in order to identify the various senses of ambiguous words, and to re-tag similar images with missing tags. © 2012 Springer-Verlag.
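The abstract only outlines the pipeline, so the following is a minimal CUDA sketch of the core idea as described: a query self-similarity descriptor is compared in parallel against a flat database array of descriptors, and each matching descriptor casts a vote for its source image. The descriptor length (64), the distance threshold, and all names here are illustrative assumptions, not the paper's actual parameters; in particular, the paper's voting scheme additionally accounts for the spatial layout of descriptors, which this sketch omits.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define DESC_DIM 64  // assumed self-similarity descriptor length

// One thread per database descriptor: compute the squared L2 distance
// to the query descriptor and, if it falls below the threshold, cast
// an atomic vote for the image the descriptor came from.
__global__ void matchAndVote(const float* db,      // [numDb * DESC_DIM], flat layout
                             const int*   imageId, // [numDb], source image per descriptor
                             const float* query,   // [DESC_DIM]
                             int numDb, float threshold,
                             int* votes)           // [numImages], vote accumulator
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numDb) return;

    float dist = 0.0f;
    for (int d = 0; d < DESC_DIM; ++d) {
        float diff = db[i * DESC_DIM + d] - query[d];
        dist += diff * diff;
    }
    if (dist < threshold * threshold)
        atomicAdd(&votes[imageId[i]], 1);
}

int main()
{
    const int numImages = 4;
    const int descPerImage = 256;
    const int numDb = numImages * descPerImage;

    // Synthetic database: random descriptors tagged with their source image.
    float* hDb = (float*)malloc(numDb * DESC_DIM * sizeof(float));
    int*   hId = (int*)malloc(numDb * sizeof(int));
    float  hQuery[DESC_DIM];
    for (int i = 0; i < numDb * DESC_DIM; ++i) hDb[i] = (float)rand() / RAND_MAX;
    for (int i = 0; i < numDb; ++i)            hId[i] = i / descPerImage;
    for (int d = 0; d < DESC_DIM; ++d)         hQuery[d] = (float)rand() / RAND_MAX;

    float *dDb, *dQuery; int *dId, *dVotes;
    cudaMalloc(&dDb, numDb * DESC_DIM * sizeof(float));
    cudaMalloc(&dId, numDb * sizeof(int));
    cudaMalloc(&dQuery, DESC_DIM * sizeof(float));
    cudaMalloc(&dVotes, numImages * sizeof(int));
    cudaMemcpy(dDb, hDb, numDb * DESC_DIM * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dId, hId, numDb * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dQuery, hQuery, DESC_DIM * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(dVotes, 0, numImages * sizeof(int));

    int threads = 256;
    int blocks  = (numDb + threads - 1) / threads;
    matchAndVote<<<blocks, threads>>>(dDb, dId, dQuery, numDb, 2.0f, dVotes);

    int hVotes[numImages];
    cudaMemcpy(hVotes, dVotes, numImages * sizeof(int), cudaMemcpyDeviceToHost);
    for (int img = 0; img < numImages; ++img)
        printf("image %d: %d votes\n", img, hVotes[img]);

    cudaFree(dDb); cudaFree(dId); cudaFree(dQuery); cudaFree(dVotes);
    free(hDb); free(hId);
    return 0;
}
```

The flat descriptor array with a parallel image-id array mirrors the abstract's point about arranging the database for parallel comparison: every thread touches contiguous memory and needs no per-image indirection, so a brute-force scan over thousands of images maps cleanly onto the GPU.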

Citation (APA)

Schwarz, K., Häußler, T., & Lensch, H. P. A. (2012). An efficient parallel strategy for matching visual self-similarities in large image databases. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7583 LNCS, pp. 281–290). Springer Verlag. https://doi.org/10.1007/978-3-642-33863-2_28
