Quorum based image retrieval in large scale visual sensor networks

Abstract

A recent publication by [SPKK] introduces a framework and set of rules by which object recognition can be performed in a visual sensor network. Extracted features of the detected object are flooded through the network, with dimensionality reduced at each hop. Each sensor matches the corresponding feature of a newly observed object against a locally stored one and sends the query over the backward link toward the original detector for matching. Building on their framework, we introduce an algorithm that aims to minimize the number of messages passed within the network when performing an image retrieval task. Extracted features are distributed along a row, while query matching progresses along a column. We compare our results to the algorithm proposed by [SPKK] and achieve fewer transmissions in the retrieval step while avoiding flooding in the pre-processing phase. We extend our algorithm by constructing an information mesh over multiple detections of the same object, so that a query is matched against the nearest copy. We also propose a novel feature reduction method that divides the image into k² subimages and extracts features in each subimage. This allows histogram-based features to be replaced by a wide range of other options. © 2012 Springer-Verlag.
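
The row/column quorum idea and the k²-subimage feature reduction can be sketched roughly as follows. This is a minimal illustration under assumptions not stated in the abstract: a k×k grid of sensors addressed by coordinates, a per-tile mean-intensity descriptor as a stand-in for any per-subimage feature, and the names GridNetwork, publish, query, and subimage_features are all invented for the example; it is not the authors' implementation.

```python
# Illustrative sketch only: the grid/message model and the per-tile mean
# feature below are assumptions for demonstration, not the paper's code.
import numpy as np


def subimage_features(image, k):
    """Split an image into k*k tiles and compute one scalar feature per tile.

    The mean intensity used here is a placeholder for any per-subimage
    descriptor the feature-reduction scheme might employ.
    """
    h, w = image.shape
    th, tw = h // k, w // k
    feats = [
        image[r * th:(r + 1) * th, c * tw:(c + 1) * tw].mean()
        for r in range(k)
        for c in range(k)
    ]
    return np.array(feats)


class GridNetwork:
    """A k*k grid of sensors; each cell stores the feature vectors it holds."""

    def __init__(self, k):
        self.k = k
        self.store = {(r, c): [] for r in range(k) for c in range(k)}
        self.messages = 0  # count of hop-by-hop transmissions

    def publish(self, detector, feature, label):
        """Detector replicates its feature along its ROW (the write quorum)."""
        r, _ = detector
        for c in range(self.k):
            if (r, c) != detector:
                self.messages += 1          # one transmission per row hop
            self.store[(r, c)].append((label, feature))

    def query(self, querier, feature, threshold=5.0):
        """Querier forwards the query along its COLUMN (the read quorum).

        Every column intersects every row, so some node on the column holds a
        copy of any previously published feature; no flooding is required.
        """
        _, c = querier
        best = None
        for r in range(self.k):
            if (r, c) != querier:
                self.messages += 1          # one transmission per column hop
            for label, stored in self.store[(r, c)]:
                d = np.linalg.norm(stored - feature)
                if d < threshold and (best is None or d < best[1]):
                    best = (label, d)
        return best


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = GridNetwork(k=4)

    obj = rng.random((64, 64))              # toy "image" of a detected object
    net.publish(detector=(1, 2), feature=subimage_features(obj, 4), label="object-A")

    # A different node later observes roughly the same object and queries.
    noisy = obj + rng.normal(0, 0.01, obj.shape)
    print(net.query(querier=(3, 0), feature=subimage_features(noisy, 4)))
    print("transmissions:", net.messages)
```

In this toy model a publication costs on the order of k transmissions along a row and a query on the order of k transmissions along a column, and the guaranteed row/column intersection provides a meeting point without flooding all k² nodes, which is the kind of message saving the abstract claims over the flooding approach of [SPKK].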

Citation (APA)

Milovanovic, S., & Stojmenovic, M. (2012). Quorum based image retrieval in large scale visual sensor networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7363 LNCS, pp. 449–458). https://doi.org/10.1007/978-3-642-31638-8_34
