Fusion of telemetric and visual data from road scenes with a Lexus experimental platform


Abstract

Fusion of telemetric and visual data from traffic scenes helps exploit synergies between the different on-board sensors that monitor the environment around the ego-vehicle. This paper outlines our approach to sensor data fusion and to the detection and tracking of objects in a dynamic environment. The approach uses a Bayesian Occupancy Filter to obtain a spatio-temporal grid representation of the traffic scene. We have implemented the approach on our experimental platform, a Lexus car. The data is obtained in traffic scenes typical of urban driving, with multiple road participants. The data fusion results in a model of the dynamic environment of the ego-vehicle. The model serves for the subsequent analysis and interpretation of the traffic scene, enabling collision risk estimation to improve driving safety. © 2011 IEEE.
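To make the grid representation concrete, below is a minimal sketch of a Bayesian occupancy grid update in log-odds form. This is not the paper's Bayesian Occupancy Filter, which additionally estimates per-cell velocity distributions to obtain a spatio-temporal grid; all names and parameter values here (GRID_SHAPE, P_OCC_HIT, P_OCC_MISS) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a Bayesian occupancy grid update (log-odds form).
# Not the paper's BOF implementation: the BOF also tracks cell velocity
# distributions. All constants below are assumed for illustration.

GRID_SHAPE = (100, 100)   # cells covering the area around the ego-vehicle
P_OCC_HIT = 0.7           # P(occupied | sensor reports a hit in the cell)
P_OCC_MISS = 0.3          # P(occupied | sensor ray passed through the cell)

log_odds = np.zeros(GRID_SHAPE)  # prior P(occupied) = 0.5 in every cell


def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))


def update(hits, misses):
    """Fuse one scan: 'hits' and 'misses' are lists of (row, col) cells."""
    for r, c in hits:
        log_odds[r, c] += logit(P_OCC_HIT)    # evidence for occupancy
    for r, c in misses:
        log_odds[r, c] += logit(P_OCC_MISS)   # evidence for free space


def occupancy_probability():
    """Recover P(occupied) per cell from the accumulated log-odds."""
    return 1.0 / (1.0 + np.exp(-log_odds))


if __name__ == "__main__":
    # Example: a lidar return at cell (50, 60), free space along the ray.
    update(hits=[(50, 60)], misses=[(50, 58), (50, 59)])
    print(occupancy_probability()[50, 58:61])
```

The log-odds form makes fusing successive (or multi-sensor) observations a simple per-cell addition, which is why grid-based fusion scales well to dense telemetric and visual data.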

Citation (APA)

Paromtchik, I. E., Perrollaz, M., & Laugier, C. (2011). Fusion of telemetric and visual data from road scenes with a Lexus experimental platform. In IEEE Intelligent Vehicles Symposium, Proceedings (pp. 746–751). https://doi.org/10.1109/IVS.2011.5940571
