Pedestrian detection based on YOLOv3 multimodal data fusion

Abstract

Multi-sensor fusion has essential applications in target detection. Given the current demand for miniaturized on-board computers in driverless vehicles, this paper proposes the multimodal data YOLOv3 (MDY) algorithm for pedestrian detection on embedded devices. MDY uses YOLOv3 as its basic framework and improves pedestrian detection accuracy by optimizing the anchor boxes and adding a small-target detection branch. The network is then accelerated with TensorRT to improve real-time performance on embedded devices. Finally, a hybrid fusion framework fuses LiDAR point-cloud data with the improved YOLOv3 detections to compensate for the shortcomings of a single sensor, improving detection accuracy while maintaining speed. The improved YOLOv3 raises AP by 6.4% and speed by 11.3 FPS over the original algorithm, and MDY achieves better overall performance on the KITTI dataset. To further verify its feasibility, MDY was tested on an unmanned vehicle with a Jetson TX2 embedded device as the on-board computer in a campus scenario; it achieved 90.8% accuracy while running in real time, demonstrating that adequate detection accuracy and real-time performance can be obtained on an embedded device.
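The abstract does not detail how the anchor boxes are optimized, but for YOLO-family detectors this step is conventionally done by k-means clustering of the ground-truth box dimensions with a 1 − IoU distance. The sketch below illustrates that standard procedure only; it is an assumption, not the paper's exact method, and the function names and synthetic box sizes are hypothetical stand-ins for KITTI pedestrian labels.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between boxes and anchors using width/height only
    (both sets are assumed to be centred at the origin)."""
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
            + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with a 1 - IoU distance to obtain k anchor boxes."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # assign each box to the anchor with the largest IoU
        assign = iou_wh(boxes, anchors).argmax(axis=1)
        new_anchors = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else anchors[i]
            for i in range(k)
        ])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    # return anchors sorted by area, as YOLO assigns them to scales in order
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

if __name__ == "__main__":
    # synthetic (w, h) box sizes standing in for pedestrian labels
    boxes = np.abs(np.random.default_rng(1).normal([40, 100], [15, 40], size=(500, 2)))
    print(kmeans_anchors(boxes, k=9))
```

In practice the clustered anchors would replace the default YOLOv3 anchors in the model configuration before training, with smaller clusters feeding the added small-target detection branch.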

Cite

APA

Wang, C., Liu, Y. S., Chang, F. X., & Lu, M. (2022). Pedestrian detection based on YOLOv3 multimodal data fusion. Systems Science and Control Engineering, 10(1), 832–845. https://doi.org/10.1080/21642583.2022.2129507
