Ad Hoc-Obstacle Avoidance-Based Navigation System Using Deep Reinforcement Learning for Self-Driving Vehicles

Abstract

This article describes a novel navigation algorithm for self-driving vehicles that avoids collisions with pedestrians and ad hoc obstacles. The proposed algorithm predicts the locations of ad hoc obstacles and wandering pedestrians using an RGB-D depth sensor, and unique ad hoc-obstacle-aware mobility rules are presented to account for these environmental uncertainties. A Deep Reinforcement Learning (DRL) algorithm serves as the decision-making technique that steers the self-driving vehicle to its target without incident. The deep Q-network (DQN), double deep Q-network (DDQN), and dueling double deep Q-network (D3DQN) algorithms were compared, and the D3DQN accumulated the fewest negative rewards. The algorithms were tested in the CARLA simulation environment to examine input values from RGB-D and RGB-Lidar sensors, and the convolutional-neural-network-based D3DQN was consequently selected as the optimal DRL algorithm. In the modeling of slow-moving urban traffic, RGB-D and RGB-Lidar produced essentially the same results. A child's ride-on car was converted into a self-driving platform to demonstrate the real-time effectiveness of the proposed algorithm.
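The D3DQN named in the abstract combines two standard refinements of the DQN that can be sketched in a few lines: a dueling head that decomposes each Q-value into a state value plus a mean-centered action advantage, and a double-DQN target in which the online network selects the next action while the target network evaluates it. The sketch below is illustrative only, assuming a small discrete action set; the function names are not taken from the paper's implementation.

```python
def dueling_q_values(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    `value` is the scalar state value V(s); `advantages` is a list of
    per-action advantages A(s, a). Subtracting the mean advantage keeps
    the V/A decomposition identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]


def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN bootstrap target for one transition.

    The online network's Q-values (`q_online_next`) choose the next
    action; the target network's Q-values (`q_target_next`) evaluate it,
    which reduces the overestimation bias of plain DQN targets.
    """
    if done:
        return reward
    best_action = max(range(len(q_online_next)), key=lambda i: q_online_next[i])
    return reward + gamma * q_target_next[best_action]


# e.g. dueling_q_values(1.0, [0.0, 2.0, 4.0]) -> [-1.0, 1.0, 3.0]
```

In a full agent these targets would drive the temporal-difference loss on minibatches sampled from a replay buffer; the aggregation and target rules above are the pieces that distinguish D3DQN from the plain DQN baseline it was compared against.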

Citation (APA)

Manikandan, N. S., Kaliyaperumal, G., & Wang, Y. (2023). Ad Hoc-Obstacle Avoidance-Based Navigation System Using Deep Reinforcement Learning for Self-Driving Vehicles. IEEE Access, 11, 92285–92297. https://doi.org/10.1109/ACCESS.2023.3297661
