Real-time Target Recognition for Urban Autonomous Vehicles Based on Information Fusion

Abstract

To address the limited sensing dimensions and poor real-time performance of a single sensor, a real-time target recognition method for urban autonomous vehicles based on the fusion of LiDAR and camera data is proposed. A coordinate transformation model between the two sensors is established to achieve pixel-level matching, and the YOLOv3-tiny algorithm is improved to increase target detection accuracy. The LiDAR point cloud is downsampled with a voxel grid filter, and ground points are removed according to the slope between LiDAR points; a model relating clustering radius to distance is established to cluster the non-ground points; the envelope concept from image processing is introduced to obtain the target's 3D bounding box and pose; finally, the visual target features are fused with the LiDAR target features. Experimental results show that the improved YOLOv3-tiny algorithm achieves a higher recognition rate for dense urban targets, the LiDAR pipeline completes three-dimensional target detection and pose estimation, and the fusion recognition system meets actual driving requirements in terms of both accuracy and real-time performance.
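
The abstract mentions a coordinate transformation model for pixel-level LiDAR-camera matching and a model relating clustering radius to distance. The sketch below is a minimal illustration of those two ideas under standard assumptions, not the paper's implementation: it assumes a pinhole camera with a known 4x4 LiDAR-to-camera extrinsic transform and a 3x3 intrinsic matrix, and a simple linear radius-distance relation; all names and parameter values are illustrative.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project N x 3 LiDAR points into pixel coordinates.

    T_cam_lidar : 4x4 extrinsic transform from the LiDAR frame to the camera frame.
    K           : 3x3 camera intrinsic matrix.
    Returns an M x 2 array of pixel coordinates and the indices of the
    points that lie in front of the image plane.
    """
    # Homogeneous LiDAR coordinates -> camera frame
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive depth)
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, np.flatnonzero(in_front)


def adaptive_cluster_radius(distance, r0=0.3, k=0.01):
    """Illustrative distance-dependent clustering radius (assumed linear form):
    points farther from the sensor are sparser, so a larger neighborhood
    radius is used when clustering them."""
    return r0 + k * distance
```

In a fusion pipeline of this kind, the projected pixel coordinates let 2D detections (e.g., from YOLOv3-tiny) be associated with LiDAR clusters, while the distance-dependent radius keeps far, sparse objects from being fragmented into multiple clusters.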

Citation (APA)

Xue, P., Wu, Y., Yin, G., Liu, S., Lin, Y., Huang, W., & Zhang, Y. (2020). Real-time Target Recognition for Urban Autonomous Vehicles Based on Information Fusion. Jixie Gongcheng Xuebao/Journal of Mechanical Engineering, 56(12), 165–173. https://doi.org/10.3901/JME.2020.12.165
