Leveraging redundancy in sampling-interpolation applications for sensor networks

Abstract

An important class of sensor network applications aims at estimating the spatiotemporal behavior of a physical phenomenon, such as temperature variations over an area of interest. These networks thereby essentially act as a distributed sampling system. However, unlike in the event detection class of sensor networks, the notion of sensing range is largely meaningless in this case. As a result, existing techniques to exploit sensing redundancy for event detection, which rely on the existence of such a sensing range, become unusable. Instead, this paper presents a new method to exploit redundancy for the sampling class of applications, which adaptively selects the smallest set of reporting sensors to act as sampling points. By projecting the sensor space onto an equivalent Hilbert space, this method ensures sufficiently accurate sampling and interpolation, without a priori knowledge of the statistical structure of the physical process. Results are presented using synthetic sensor data and show significant reductions in the number of active sensors. © Springer-Verlag Berlin Heidelberg 2007.
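To make the idea concrete, the following is a minimal sketch of the kind of adaptive selection the abstract describes, not the paper's actual algorithm. It uses a Gaussian radial basis function interpolant as a stand-in for the Hilbert-space projection, and greedily recruits reporting sensors until the readings of all idle sensors are reproduced within a tolerance. The synthetic field, the RBF kernel, the length scale `ell`, and the tolerance `tol` are all illustrative assumptions.

```python
import numpy as np

def rbf_fit_predict(train_xy, train_vals, query_xy, ell=0.3, jitter=1e-8):
    """Fit a Gaussian RBF interpolant to the reporting sensors and evaluate
    it at the query positions (illustrative stand-in for the paper's
    Hilbert-space formulation)."""
    d_tt = np.linalg.norm(train_xy[:, None] - train_xy[None, :], axis=2)
    K = np.exp(-(d_tt / ell) ** 2) + jitter * np.eye(len(train_xy))
    coef = np.linalg.solve(K, train_vals)
    d_qt = np.linalg.norm(query_xy[:, None] - train_xy[None, :], axis=2)
    return np.exp(-(d_qt / ell) ** 2) @ coef

def select_sensors(xy, vals, tol=0.1):
    """Greedily grow the reporting set until every idle sensor's reading is
    reproduced within `tol` by interpolation, or all sensors report."""
    n = len(xy)
    # Seed with the sensor whose reading deviates most from the mean.
    active = [int(np.argmax(np.abs(vals - vals.mean())))]
    while len(active) < n:
        idle = np.setdiff1d(np.arange(n), active)
        err = np.abs(rbf_fit_predict(xy[active], vals[active], xy[idle]) - vals[idle])
        if err.max() <= tol:
            break
        # Recruit the worst-predicted idle sensor into the reporting set.
        active.append(int(idle[np.argmax(err)]))
    return sorted(active)

rng = np.random.default_rng(0)
xy = rng.random((100, 2))                           # synthetic sensor positions
vals = np.sin(3 * xy[:, 0]) + np.cos(3 * xy[:, 1])  # smooth synthetic field
active = select_sensors(xy, vals)
print(f"{len(active)} of {len(xy)} sensors selected to report")
```

On a smooth field like this one, the greedy loop typically terminates with far fewer reporting sensors than the full deployment, which mirrors the reductions in active sensors reported in the paper's synthetic experiments.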

Citation (APA)

Liaskovits, P., & Schurgers, C. (2007). Leveraging redundancy in sampling-interpolation applications for sensor networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4549 LNCS, pp. 324–337). Springer Verlag. https://doi.org/10.1007/978-3-540-73090-3_22
