Abstract
Autonomous driving has become a popular area of research in recent years, and accurate perception and recognition of the environment are critical for its successful implementation. Traditional methods for traffic recognition and steering control rely on the color and shape of traffic lights and road lanes, which limits their ability to handle complex scenarios and variations in data. This paper presents an optimization of the You Only Look Once (YOLO) object detection algorithm for traffic light detection, together with end-to-end steering control for lane-keeping, in a simulation environment. The study compares the performance of the YOLOv5, YOLOv6, YOLOv7, and YOLOv8 models for traffic light signal detection; YOLOv8 achieves the best results, with a mean Average Precision (mAP) of 98.5%. Additionally, the study proposes an end-to-end convolutional neural network (CNN) based steering angle controller that combines data from a classical proportional-integral-derivative (PID) controller with steering angles derived from human perception. This controller predicts the steering angle accurately, outperforming conventional methods built on the Open Source Computer Vision Library (OpenCV). The proposed algorithms are validated on an autonomous vehicle model in a simulated Gazebo environment under Robot Operating System 2 (ROS2).
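The classical PID controller mentioned above maps a lane-center error to a steering command. As a minimal sketch of that baseline (the gains, time step, and sign convention here are illustrative assumptions, not values from the paper):

```python
class PIDSteering:
    """Minimal PID steering controller: maps lateral lane-center
    error (meters) to a steering angle. Gains are illustrative
    placeholders, not tuned values from the paper."""

    def __init__(self, kp=0.8, ki=0.01, kd=0.2, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error
        self.prev_error = 0.0    # error from the previous step

    def step(self, error):
        """Return a steering angle for the current lateral error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


pid = PIDSteering()
# Vehicle drifting 0.5 m off lane center: controller steers back
# toward the center (positive angle under this sign convention).
angle = pid.step(0.5)
```

In the paper's approach, steering angles recorded from such a controller, together with human driving data, serve as training targets for the end-to-end CNN, which then predicts the angle directly from camera images.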
Citation
Ngoc, H. T., Nguyen, K. H., Hua, H. K., Nguyen, H. V. N., & Quach, L. D. (2023). Optimizing YOLO Performance for Traffic Light Detection and End-to-End Steering Control for Autonomous Vehicles in Gazebo-ROS2. International Journal of Advanced Computer Science and Applications, 14(7), 475–484. https://doi.org/10.14569/IJACSA.2023.0140752