Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning

Abstract

As a core technology in the field of mobile robots, obstacle avoidance substantially enhances the running stability of robots. Most existing obstacle avoidance methods are built on path planning or guidance, and they perform inefficiently in complicated and unpredictable environments. In this paper, we propose an obstacle avoidance method with a hierarchical controller based on deep reinforcement learning, which realizes more efficient, adaptive obstacle avoidance without path planning. The controller contains multiple neural networks: an action selector, and an action runner consisting of two neural network strategies and two single actions. The action selector and each neural network strategy are trained separately in a simulation environment before being deployed on a robot. We validated the method on wheeled robots; more than 200 tests yield a success rate of up to 90%.
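The hierarchical structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network sizes, the observation dimension, and the two fixed primitive actions (rotate in place, stop) are assumptions chosen for the example, and the small MLPs stand in for the trained selector and sub-policy networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_weights(in_dim, hidden, out_dim):
    """Random weights for a one-hidden-layer MLP (stand-in for a trained net)."""
    return (rng.normal(size=(in_dim, hidden)) * 0.1, np.zeros(hidden),
            rng.normal(size=(hidden, out_dim)) * 0.1, np.zeros(out_dim))

def mlp_forward(obs, weights):
    """Forward pass: tanh hidden layer, linear output."""
    w1, b1, w2, b2 = weights
    return np.tanh(obs @ w1 + b1) @ w2 + b2

class HierarchicalController:
    """Top-level action selector picks one of four options per step:
    two learned sub-policies or two fixed single actions (illustrative)."""

    def __init__(self, obs_dim=24, hidden=32):
        self.selector = make_weights(obs_dim, hidden, 4)       # 4 options
        self.policy_avoid = make_weights(obs_dim, hidden, 2)   # learned avoidance
        self.policy_seek = make_weights(obs_dim, hidden, 2)    # learned goal-seeking

    def act(self, obs):
        """Return a (linear_velocity, angular_velocity) command."""
        choice = int(np.argmax(mlp_forward(obs, self.selector)))
        if choice == 0:
            return mlp_forward(obs, self.policy_avoid)
        if choice == 1:
            return mlp_forward(obs, self.policy_seek)
        if choice == 2:
            return np.array([0.0, 0.5])   # fixed action: rotate in place
        return np.array([0.0, 0.0])       # fixed action: stop

# Usage: one control step on a dummy observation (e.g. lidar + goal features).
ctrl = HierarchicalController()
v, w = ctrl.act(rng.normal(size=24))
print(f"command: v={v:.3f}, w={w:.3f}")
```

In this scheme, only the selector decides *which* behavior runs at each step, so each sub-policy can be trained on its own simpler sub-task before the selector is trained over them, which is consistent with the separate training the abstract describes.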

Citation (APA)

Tang, Y., Chen, Q., & Wei, Y. (2022). Robot Obstacle Avoidance Controller Based on Deep Reinforcement Learning. Journal of Sensors, 2022. https://doi.org/10.1155/2022/4194747
