Dense Pedestrian Detection Based on YOLO-V4 Network Reconstruction and CIoU Loss Optimization

Abstract

Online object detection is a fundamental problem in time-critical video analysis applications. Because one-stage detectors degrade under dense pedestrian occlusion, this paper improves YOLO-V4 in three respects: the network structure is optimized, a more efficient multi-scale feature fusion strategy is formulated, and a more specialized loss function is designed. First, a single-output YOLO-V4 structure is proposed that integrates image information from multiple scales through a designed ladder fusion strategy; this keeps anchor aspect-ratio estimation driven by the training data while resolving the original network's invalid anchor distribution for objects of similar size. Second, the resolution ratio between the network's output feature map and the input image is adjusted to reduce label rewriting among training samples. Finally, the concept of a repulsive force is introduced into the bounding-box regression loss, which makes the model more robust when detecting densely occluded pedestrians and increases the practical value of YOLO-V4 in real application scenarios.
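As background for the loss design summarized above, the following is a minimal sketch (not the authors' code) of a CIoU regression loss combined with a simple repulsion-style penalty in the spirit of the repulsive-force idea. The (x1, y1, x2, y2) box format, the particular repulsion form (largest IoU with ground truths of other pedestrians), the rep_weight value, and all function names are assumptions for illustration; the paper's exact formulation may differ.

import math
import torch


def ciou_loss(pred, target, eps=1e-7):
    # CIoU between matched boxes in (x1, y1, x2, y2) format, both shaped (N, 4).
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared centre distance, normalised by the diagonal of the smallest
    # enclosing box.
    rho2 = ((pred[:, 0] + pred[:, 2]) - (target[:, 0] + target[:, 2])) ** 2 / 4 + \
           ((pred[:, 1] + pred[:, 3]) - (target[:, 1] + target[:, 3])) ** 2 / 4
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term.
    w_p = (pred[:, 2] - pred[:, 0]).clamp(min=eps)
    h_p = (pred[:, 3] - pred[:, 1]).clamp(min=eps)
    w_t = (target[:, 2] - target[:, 0]).clamp(min=eps)
    h_t = (target[:, 3] - target[:, 1]).clamp(min=eps)
    v = (4 / math.pi ** 2) * (torch.atan(w_t / h_t) - torch.atan(w_p / h_p)) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v


def repulsion_penalty(pred, other_gt, eps=1e-7):
    # Mean of the largest IoU each prediction has with ground-truth boxes of
    # *other* pedestrians; minimising it pushes crowded detections apart.
    if other_gt.numel() == 0:
        return pred.new_zeros(())
    ix1 = torch.max(pred[:, None, 0], other_gt[None, :, 0])
    iy1 = torch.max(pred[:, None, 1], other_gt[None, :, 1])
    ix2 = torch.min(pred[:, None, 2], other_gt[None, :, 2])
    iy2 = torch.min(pred[:, None, 3], other_gt[None, :, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (other_gt[:, 2] - other_gt[:, 0]) * (other_gt[:, 3] - other_gt[:, 1])
    iou = inter / (area_p[:, None] + area_g[None, :] - inter + eps)
    return iou.max(dim=1).values.mean()


def box_regression_loss(pred, target, other_gt, rep_weight=0.5):
    # Attraction to the matched target plus weighted repulsion from its
    # neighbours; rep_weight = 0.5 is an illustrative placeholder.
    return ciou_loss(pred, target).mean() + rep_weight * repulsion_penalty(pred, other_gt)

The weighting between the attraction and repulsion terms governs how strongly overlapping detections are pushed apart and would need to be tuned for the target dataset.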

Citation (APA)

Zhang, G., Du, Z., Lu, W., & Meng, X. (2022). Dense Pedestrian Detection Based on YOLO-V4 Network Reconstruction and CIoU Loss Optimization. In Journal of Physics: Conference Series (Vol. 2171). IOP Publishing Ltd. https://doi.org/10.1088/1742-6596/2171/1/012019
