Detection and recognition of traffic signs are key to building advanced driver assistance systems. Producing high-precision maps likewise requires identifying and extracting road elements such as traffic signs. Traditional detection and recognition methods no longer meet today's needs, and deep-learning-based object recognition algorithms have become the mainstream solution. However, current algorithms still have limitations: one-stage algorithms are fast but their accuracy is unsatisfactory, especially for small objects, while two-stage algorithms are more accurate but much slower. This paper addresses these problems by extending the YOLOv3 algorithm with a proposed parallel attention convolution module (PACM), a channel attention feature pyramid network (CAFPN), and a diagonal and center point IoU (DCPIoU) loss function. The improved models are compared with SSD, YOLOv3, and Faster RCNN. Experimental results show that the proposed models outperform these baselines: the model with PACM, CAFPN, and DCPIoU achieves an mAP of 76.02%, an improvement of 9.27%, 6.93%, 2.94%, and 5.3% over SSD300, SSD500, Faster RCNN, and YOLOv3, respectively. The FPS of the improved model is essentially the same as that of the original YOLOv3, so real-time performance is not reduced.
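The abstract names a diagonal and center point IoU (DCPIoU) loss but does not give its formulation. As a rough, purely illustrative sketch of the family such losses belong to, the Python snippet below computes an IoU loss augmented with a center-point distance penalty normalized by the squared diagonal of the smallest enclosing box (the common DIoU recipe); the paper's actual DCPIoU definition may differ.

# Illustrative sketch only: IoU loss with a center-point / enclosing-diagonal penalty,
# in the spirit of DIoU-style losses. Not the paper's exact DCPIoU formulation.
import torch

def iou_center_diagonal_loss(pred, target, eps=1e-7):
    """pred, target: (N, 4) boxes in (x1, y1, x2, y2) format.
    Returns per-box loss = 1 - IoU + rho^2 / c^2, where rho is the distance
    between box centers and c is the diagonal of the smallest enclosing box."""
    # Intersection area
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Squared distance between box centers
    cx_p = (pred[:, 0] + pred[:, 2]) / 2
    cy_p = (pred[:, 1] + pred[:, 3]) / 2
    cx_t = (target[:, 0] + target[:, 2]) / 2
    cy_t = (target[:, 1] + target[:, 3]) / 2
    rho2 = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Squared diagonal of the smallest box enclosing both boxes
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    return 1.0 - iou + rho2 / c2

A loss of this form keeps the usual IoU overlap term while directly penalizing the offset between predicted and ground-truth box centers, which is the stated motivation for center-point-aware IoU variants.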
CITATION
Huang, H., Liang, Q., Luo, D., & Lee, D. H. (2022). Attention-Enhanced One-Stage Algorithm for Traffic Sign Detection and Recognition. Journal of Sensors, 2022. https://doi.org/10.1155/2022/3705256