A Multi-sensor Information Fusion Method for Autonomous Vehicle Perception System

Abstract

In the context of environmental perception for autonomous vehicles (AVs), this paper establishes a sensor model based on the experimental fusion of lidar and a monocular camera. After spatial and temporal synchronization, the fusion algorithm maps three-dimensional lidar coordinate points onto the two-dimensional image plane. YOLO object detection and density-based clustering are then combined to produce fused data that contains both the obstacles' visual information and their depth information. Experimental results demonstrate the high accuracy of the proposed sensor data fusion algorithm.
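As a concrete illustration of the projection and association steps described in the abstract, the following Python sketch (not the authors' implementation) projects lidar points into the image plane using assumed intrinsic and extrinsic calibration matrices, keeps the points that fall inside a YOLO bounding box, and estimates the obstacle's depth with DBSCAN; the matrix values, DBSCAN parameters, and function names are illustrative assumptions.

    # Minimal sketch of the lidar-camera fusion pipeline sketched in the
    # abstract. Calibration matrices and DBSCAN parameters are placeholders;
    # in practice they come from spatial/temporal calibration and tuning.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Assumed camera intrinsics K (3x3) and lidar-to-camera extrinsics [R|t] (3x4).
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    Rt = np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.2]])])

    def project_to_image(points_xyz):
        """Map 3D lidar points (N, 3) to 2D pixel coordinates plus depths."""
        homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # (N, 4)
        cam = (Rt @ homo.T).T               # points in the camera frame (N, 3)
        cam = cam[cam[:, 2] > 0]            # keep only points ahead of the camera
        pix = (K @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3]      # perspective division -> pixels
        return pix, cam[:, 2]

    def obstacle_depth(points_xyz, box):
        """Estimate the depth of one YOLO detection (box = x1, y1, x2, y2)."""
        pix, depth = project_to_image(points_xyz)
        x1, y1, x2, y2 = box
        mask = (pix[:, 0] >= x1) & (pix[:, 0] <= x2) & \
               (pix[:, 1] >= y1) & (pix[:, 1] <= y2)
        if not mask.any():
            return None
        # Cluster in-box depths so background returns behind the obstacle
        # form separate clusters; eps/min_samples are assumed, not tuned.
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(
            depth[mask].reshape(-1, 1))
        valid = labels >= 0                 # drop DBSCAN noise points (-1)
        if not valid.any():
            return float(np.median(depth[mask]))
        # Take the nearest cluster (smallest mean depth) as the obstacle.
        nearest = min(set(labels[valid]),
                      key=lambda l: depth[mask][labels == l].mean())
        return float(depth[mask][labels == nearest].mean())

Clustering the in-box depths before averaging is one way to keep lidar returns from the background behind the obstacle from biasing the distance estimate.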

Citation (APA)
Mei, P., Karimi, H. R., Ma, F., Yang, S., & Huang, C. (2022). A multi-sensor information fusion method for autonomous vehicle perception system. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (Vol. 442, pp. 633–646). Springer. https://doi.org/10.1007/978-3-031-06371-8_40
