Interactive-time similarity search for large image collections using parallel VA-files


Abstract

In digital libraries, nearest-neighbor search (NN-search) plays a key role in content-based retrieval over multimedia objects. However, the performance of existing NN-search techniques is not satisfactory for large collections and high-dimensional object representations. To obtain interactive response times, we pursue the following approach: take a linear algorithm that works with approximations of the vectors and parallelize it. More precisely, we parallelize NN-search based on the VA-File over a Network of Workstations (NOW). This approach reduces search time to a reasonable level for large collections; the best speedup we have observed is almost 30, on a NOW with only three components and 900 MB of feature data. However, this requires a number of design decisions, in particular to account for load dynamism and the heterogeneity of components. Our contribution is to address these design issues.
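
The abstract does not give implementation details, so the following Python sketch only illustrates the general VA-File filter-and-refine idea it refers to (quantized vector approximations scanned linearly, then exact refinement of candidates); it is not the authors' implementation, and the quantization scheme, distance function, and candidate-selection rule are assumptions. The paper's parallel variant would distribute the approximation scan over the workstations of a NOW.

# Minimal VA-File-style filter-and-refine NN-search sketch (assumed details:
# uniform per-dimension quantization, Euclidean distance, sequential scan).
import numpy as np

def build_va_approximation(vectors, bits_per_dim=4):
    """Quantize every dimension into 2**bits_per_dim cells.

    Returns the per-vector cell indices (the compact 'approximation') and
    the cell boundaries, shaped (n_cells + 1, dim).
    """
    n_cells = 2 ** bits_per_dim
    lo, hi = vectors.min(axis=0), vectors.max(axis=0)
    edges = np.linspace(lo, hi, n_cells + 1, axis=0)
    span = np.maximum(hi - lo, 1e-12)                 # avoid division by zero
    cells = np.clip(((vectors - lo) / span * n_cells).astype(int), 0, n_cells - 1)
    return cells, edges

def lower_bounds(query, cells, edges):
    """Lower bound on the Euclidean distance from the query to every vector,
    computed from the cell boundaries alone (0 in dimensions where the
    query falls inside the vector's cell)."""
    cell_lo = np.take_along_axis(edges, cells, axis=0)
    cell_hi = np.take_along_axis(edges, cells + 1, axis=0)
    gap = np.maximum(cell_lo - query, 0.0) + np.maximum(query - cell_hi, 0.0)
    return np.sqrt((gap ** 2).sum(axis=1))

def knn_va_file(query, vectors, cells, edges, k=5, candidate_factor=10):
    """Phase 1: scan the approximations and keep the most promising vectors.
    Phase 2: refine the candidates with exact distances on the full vectors."""
    lb = lower_bounds(query, cells, edges)
    n_cand = min(len(vectors), candidate_factor * k)
    candidates = np.argpartition(lb, n_cand - 1)[:n_cand]
    exact = np.linalg.norm(vectors[candidates] - query, axis=1)
    order = np.argsort(exact)[:k]
    return candidates[order], exact[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random((10_000, 64)).astype(np.float32)
    cells, edges = build_va_approximation(data)
    ids, dists = knn_va_file(data[0], data, cells, edges, k=5)
    print(ids, dists)

The candidate cutoff above (candidate_factor * k) is a simplification; a full VA-File scan also maintains upper bounds on the candidate distances to prune vectors whose lower bound already exceeds the k-th smallest upper bound.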

Citation (APA)

Weber, R., Böhm, K., & Schek, H. J. (2000). Interactive-time similarity search for large image collections using parallel VA-files. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1923, pp. 83–92). Springer Verlag. https://doi.org/10.1007/3-540-45268-0_8
