Algorithms for Context Learning and Information Representation for Multi-Sensor Teams

Abstract

Sensor measurements of the state of a system are affected by natural and man-made operating conditions that are not accounted for in the definition of the system states. It is postulated that these conditions, called contexts, are such that the measurements from individual sensors are conditionally independent given each pair of system state and context. This postulate leads to kernel-based unsupervised learning of a measurement model that defines a common context set for all sensor modalities and automatically accounts for both known and unknown contextual effects. The resulting measurement model is used to develop a context-aware sensor fusion technique for multi-modal sensor teams performing state estimation. Moreover, a symbolic compression technique, which replaces raw measurement data with their low-dimensional features in real time, makes the proposed context-learning approach scalable to large volumes of data from heterogeneous sensors. The developed approach is validated with field experiments on multi-modal unattended ground sensors performing human walking-style classification.
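The central postulate, that sensor measurements are conditionally independent given the pair of system state and context, allows a fusion node to marginalize over the learned context set when estimating the state. The sketch below illustrates only that fusion step under simplifying assumptions (discrete states and contexts, precomputed per-sensor likelihoods); it is not the chapter's kernel-based learning method, and the function and variable names are illustrative, not taken from the source.

```python
import numpy as np

def context_aware_fusion(per_sensor_likelihoods, state_prior, context_prior):
    """Context-marginalized Bayesian state estimation (illustrative sketch).

    per_sensor_likelihoods: list of arrays, one per sensor, each of shape
        (n_states, n_contexts), giving p(y_k | state, context) for the
        measurement y_k actually observed by sensor k.
    state_prior: array of shape (n_states,) with p(state).
    context_prior: array of shape (n_contexts,) with p(context).
    Returns the posterior p(state | y_1, ..., y_K).
    """
    n_states, n_contexts = per_sensor_likelihoods[0].shape
    # Conditional independence given (state, context): the joint likelihood
    # of all sensor measurements factorizes into a product over sensors.
    joint = np.ones((n_states, n_contexts))
    for lik in per_sensor_likelihoods:
        joint *= lik
    # Marginalize out the unobserved context, then apply the state prior.
    marginal = joint @ context_prior          # shape (n_states,)
    posterior = marginal * state_prior
    return posterior / posterior.sum()

# Example with made-up numbers: two sensors, three states, two contexts.
lik_acoustic = np.array([[0.6, 0.2], [0.3, 0.5], [0.1, 0.3]])
lik_seismic  = np.array([[0.5, 0.1], [0.4, 0.6], [0.1, 0.3]])
posterior = context_aware_fusion(
    [lik_acoustic, lik_seismic],
    state_prior=np.array([1/3, 1/3, 1/3]),
    context_prior=np.array([0.7, 0.3]),
)
print(posterior)
```

In this sketch the context acts as a shared latent variable: each sensor's likelihood is evaluated per context, the product is taken across sensors, and the context is then summed out, so contextual effects influence the fused estimate without being observed directly.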

Citation (APA)

Virani, N., Sarkar, S., Lee, J. W., Phoha, S., & Ray, A. (2016). Algorithms for context learning and information representation for multi-sensor teams. In Advances in Computer Vision and Pattern Recognition (pp. 403–427). Springer. https://doi.org/10.1007/978-3-319-28971-7_15
