A likelihood-based data fusion model for the integration of multiple sensor data: A case study with vision and lidar sensors


Abstract

Sensors have been developed and applied in a wide range of fields such as robotics and autonomous vehicle navigation (AVN). Because no single sensor can fully sense its surroundings, multiple sensors with individual specialties are commonly combined to complement each other's shortcomings and enrich perception. However, integrating heterogeneous types of sensory information into useful results is challenging. This research aims to achieve a high degree of accuracy with minimal false-positive and false-negative rates for the sake of reliability and safety. This paper introduces a likelihood-based data fusion model, which integrates information from various sensors, maps it into an integrated data space, and generates a solution that accounts for all the sensory information. Two distinct sensors, an optical camera and a Light Detection and Ranging (LiDAR) sensor, were used for the experiment. The experimental results showed the usefulness of the proposed model in comparison with single-sensor outcomes.
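The fusion idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes each sensor contributes a per-cell likelihood over a shared (already co-registered) data space, and that the sensors are conditionally independent, so fused evidence is the normalized product of the individual likelihoods:

```python
# Hypothetical sketch of likelihood-based fusion over a shared data space.
# Each sensor reports a likelihood of "object present" for each cell of the
# integrated space; fusion multiplies them (conditional-independence
# assumption) and normalizes.

def fuse_likelihoods(camera, lidar):
    """Combine two per-cell likelihood lists into a normalized fused score."""
    fused = [c * l for c, l in zip(camera, lidar)]
    total = sum(fused)
    return [f / total for f in fused] if total > 0 else fused

# Illustrative values: each sensor alone is only moderately confident,
# but their agreement on cell 2 dominates after fusion.
camera = [0.2, 0.3, 0.5]   # camera likelihood per cell (hypothetical)
lidar  = [0.1, 0.2, 0.7]   # lidar likelihood per cell (hypothetical)
fused = fuse_likelihoods(camera, lidar)
best_cell = max(range(len(fused)), key=fused.__getitem__)
```

Multiplying likelihoods rewards cells where the sensors agree and suppresses single-sensor false positives, which is one way such a model can lower both false-positive and false-negative rates relative to either sensor alone.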

Citation (APA)

Jo, J., Tsunoda, Y., Stantic, B., & Liew, A. W. C. (2017). A likelihood-based data fusion model for the integration of multiple sensor data: A case study with vision and lidar sensors. In Advances in Intelligent Systems and Computing (Vol. 447, pp. 489–500). Springer Verlag. https://doi.org/10.1007/978-3-319-31293-4_39
