Multi-camera and multi-modal sensor fusion, an architecture overview

Abstract

This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors such as standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, Wi-Fi, Bluetooth, or RFID technologies. This architecture is especially well suited for indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined, non-redundant output is provided for all detected targets. In its simplest form, the fused output includes the location of each target, plus additional features depending on the sensors involved in the detection, e.g., location plus thermal information. In this way, a surveillance or context-aware system obtains more accurate and complete information than it would using only one kind of technology. © 2010 Springer-Verlag Berlin Heidelberg.
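
The abstract describes merging detections from heterogeneous sensors into a single non-redundant record per target (location plus modality-specific features such as temperature). The sketch below is a minimal illustration of that idea, not the paper's implementation: all names (SensorDetection, FusedTarget, fuse_targets) and the 2 m gating threshold are hypothetical, and the greedy spatial-gating rule is only one simple way such a fusion step could be realized.

```python
# Illustrative sketch only: detections from different sensors that fall close
# together in a common indoor coordinate frame are merged into one fused target.
from dataclasses import dataclass, field
from math import dist
from typing import Dict, List, Tuple


@dataclass
class SensorDetection:
    modality: str                     # e.g. "color", "thermal", "tof", "uwb"
    position: Tuple[float, float]     # estimated (x, y) in a shared indoor frame
    features: Dict[str, float] = field(default_factory=dict)  # e.g. {"temp_c": 36.6}


@dataclass
class FusedTarget:
    position: Tuple[float, float]
    modalities: List[str]
    features: Dict[str, float]


def fuse_targets(detections: List[SensorDetection],
                 gate_m: float = 2.0) -> List[FusedTarget]:
    """Greedy spatial gating: detections closer than `gate_m` are assumed to
    belong to the same physical target and are merged into one output record."""
    targets: List[FusedTarget] = []
    for det in detections:
        for tgt in targets:
            if dist(det.position, tgt.position) < gate_m:
                # Merge: average the positions, union the modalities and features.
                tgt.position = tuple(
                    (a + b) / 2 for a, b in zip(tgt.position, det.position)
                )
                tgt.modalities.append(det.modality)
                tgt.features.update(det.features)
                break
        else:
            targets.append(FusedTarget(det.position, [det.modality], dict(det.features)))
    return targets


if __name__ == "__main__":
    fused = fuse_targets([
        SensorDetection("uwb", (3.1, 4.0)),                        # location tag
        SensorDetection("thermal", (3.3, 4.2), {"temp_c": 36.6}),  # same person
        SensorDetection("color", (9.0, 1.5)),                      # a second target
    ])
    for target in fused:
        print(target)  # first target: location plus thermal information
```

With this kind of merge step, a surveillance or context-aware consumer sees one record per person rather than separate, redundant reports from each sensor.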

Cite

APA

Bustamante, A. L., Molina, J. M., & Patricio, M. A. (2010). Multi-camera and multi-modal sensor fusion, an architecture overview. In Advances in Intelligent and Soft Computing (Vol. 79, pp. 301–308). https://doi.org/10.1007/978-3-642-14883-5_39
