Multi-Scale Object Detection Using Feature Fusion Recalibration Network

Abstract

In this paper, a deep-learning-based object detection algorithm running on a robot platform is studied and optimized. The robot platform places high demands on the detection efficiency and scale invariance of the algorithm. To improve detection accuracy at all scales while keeping a balance between speed and accuracy, we propose the following methods. Aiming at the low detection accuracy of object detection algorithms on objects of varying scale, the traditional image pyramid technique from computer vision is first used to verify its effectiveness in improving detection accuracy on such objects. Then, by embedding the image pyramid into the network, the memory consumption of the traditional pyramid is reduced and the detection accuracy for objects of different scales is improved. A new feature fusion recalibration structure is designed: feature fusion combines low-level location information with high-level semantic information, while recalibration assigns an importance weight to each channel of the feature maps. This structure effectively improves detection accuracy at all scales without losing too much speed. We apply these two structures to YOLO. The improved algorithm achieves a significant gain in accuracy and runs at 16 FPS on a TITAN Xp GPU.
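The fusion-then-recalibration idea described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes a squeeze-and-excitation-style gating (global average pool, small bottleneck, sigmoid) for the channel recalibration, and nearest-neighbour upsampling plus concatenation for the feature fusion; the function names and weight shapes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(fmap, w1, w2):
    """Assign an importance weight to each channel of a (C, H, W) feature map.

    Sketch of squeeze-and-excitation-style gating (an assumption, not the
    paper's exact structure): global average pool -> ReLU bottleneck ->
    sigmoid -> per-channel rescaling.
    """
    squeezed = fmap.mean(axis=(1, 2))         # (C,) global average pool
    hidden = np.maximum(0.0, w1 @ squeezed)   # ReLU bottleneck
    weights = sigmoid(w2 @ hidden)            # per-channel weight in (0, 1)
    return fmap * weights[:, None, None]      # rescale each channel

def fuse_and_recalibrate(low, high, w1, w2):
    """Fuse a low-level (location) map with a high-level (semantic) map,
    then recalibrate the fused channels.

    `high` is nearest-neighbour upsampled to `low`'s spatial size and the
    two maps are concatenated along the channel axis.
    """
    scale = low.shape[1] // high.shape[1]
    up = high.repeat(scale, axis=1).repeat(scale, axis=2)
    fused = np.concatenate([low, up], axis=0)
    return channel_recalibrate(fused, w1, w2)
```

Because each channel is multiplied by a sigmoid weight in (0, 1), recalibration can only attenuate channels, letting the network emphasise the fused channels that matter most for a given scale.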

APA

Guo, Z., Zhang, W., Liang, Z., Shi, Y., & Huang, Q. (2020). Multi-Scale Object Detection Using Feature Fusion Recalibration Network. IEEE Access, 8, 51664–51673. https://doi.org/10.1109/ACCESS.2020.2980737
