Sensors have been developed and applied in a wide range of fields such as robotics and autonomous vehicle navigation (AVN). Because no single sensor can fully perceive its surroundings, multiple sensors with complementary strengths are commonly used to offset individual shortcomings and enrich perception. However, integrating heterogeneous types of sensory information into useful results remains challenging. This research aims to achieve a high degree of accuracy with minimal false-positive and false-negative rates for the sake of reliability and safety. This paper introduces a likelihood-based data fusion model, which integrates information from multiple sensors, maps it into an integrated data space, and generates a solution that accounts for all of the sensor information. Two distinct sensors, an optical camera and a Light Detection and Ranging (Lidar) sensor, were used for the experiment. The experimental results showed the usefulness of the proposed model in comparison with single-sensor outcomes.
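To make the idea of likelihood-based fusion concrete, the sketch below combines per-sensor detection likelihoods into a single posterior via Bayes' rule under a conditional-independence assumption. This is a minimal illustration of the general technique, not the authors' actual model; the sensor likelihood values and the `fuse_likelihoods` helper are hypothetical.

```python
def fuse_likelihoods(likelihoods, prior=0.5):
    """Fuse per-sensor detection likelihoods into a posterior probability.

    Assumes sensors are conditionally independent given the hypothesis
    (a common simplification; the paper's model may differ).
    likelihoods: list of (P(obs | object present), P(obs | object absent))
                 tuples, one per sensor.
    """
    num = prior          # accumulates P(present) * product of likelihoods
    den = 1.0 - prior    # accumulates P(absent) * product of likelihoods
    for p_present, p_absent in likelihoods:
        num *= p_present
        den *= p_absent
    return num / (num + den)

# Hypothetical readings: each sensor reports how likely its observation
# is under "object present" versus "object absent".
camera = (0.8, 0.3)
lidar = (0.9, 0.2)

fused = fuse_likelihoods([camera, lidar])
print(round(fused, 3))  # fused posterior exceeds either sensor alone
```

With these example numbers, the fused posterior is higher than the posterior from either sensor in isolation, which mirrors the paper's observation that fusion improves on single-sensor outcomes.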
Citation:
Jo, J., Tsunoda, Y., Stantic, B., & Liew, A. W. C. (2017). A likelihood-based data fusion model for the integration of multiple sensor data: A case study with vision and lidar sensors. In Advances in Intelligent Systems and Computing (Vol. 447, pp. 489–500). Springer Verlag. https://doi.org/10.1007/978-3-319-31293-4_39