Obstacle avoidance is a core technology for mobile robots, and its development substantially enhances the stability of robot operation. Most existing obstacle avoidance methods are built on path planning or guidance, and they perform poorly in complicated, unpredictable environments. In this paper, we propose an obstacle avoidance method based on deep reinforcement learning with a hierarchical controller, which achieves more efficient adaptive obstacle avoidance without path planning. The controller comprises multiple neural networks: an action selector and an action runner consisting of two neural-network strategies and two single actions. The action selector and each neural-network strategy are trained separately in a simulation environment before being deployed on a robot. We validated the method on wheeled robots; more than 200 tests yielded a success rate of up to 90%.
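The abstract's hierarchical structure (a selector choosing among two learned strategies and two fixed single actions) can be sketched roughly as below. This is a minimal illustration only, not the authors' implementation: the observation dimension, the random stand-in networks, and the two fixed velocity commands are all hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    # Random-weight linear map standing in for a separately trained policy
    # network (in the paper each policy is trained in simulation first).
    W = rng.normal(size=(in_dim, out_dim)) * 0.1
    b = np.zeros(out_dim)
    return lambda x: x @ W + b

OBS_DIM = 8      # hypothetical sensor vector (e.g., laser range readings)
N_OPTIONS = 4    # 2 learned strategies + 2 fixed single actions

selector = mlp(OBS_DIM, N_OPTIONS)       # action selector (hypothetical)
strategy_a = mlp(OBS_DIM, 2)             # learned sub-policy -> (v, w)
strategy_b = mlp(OBS_DIM, 2)             # learned sub-policy -> (v, w)
FIXED_ACTIONS = [np.array([0.3, 0.0]),   # assumed fixed action: go straight
                 np.array([0.0, 0.8])]   # assumed fixed action: rotate

def control(obs):
    # Hierarchical step: the selector scores the four options and the
    # action runner executes whichever option scored highest.
    option = int(np.argmax(selector(obs)))
    if option == 0:
        return strategy_a(obs)
    if option == 1:
        return strategy_b(obs)
    return FIXED_ACTIONS[option - 2]

cmd = control(rng.normal(size=OBS_DIM))
print(cmd.shape)  # 2-element linear/angular velocity command
```

The key design point the abstract suggests is that each component is trained in isolation, so the selector learns to switch between already-competent behaviors rather than learning low-level control end to end.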
Tang, Y., Chen, Q., & Wei, Y. (2022). Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning. Journal of Sensors, 2022. https://doi.org/10.1155/2022/4194747