Robot Navigation in Crowded Environments: A Reinforcement Learning Approach

Abstract

For a mobile robot, navigation in a densely crowded space can be a challenging and sometimes impossible task, especially with traditional techniques. In this paper, we present a framework to train neural controllers for differential drive mobile robots that must safely navigate a crowded environment while trying to reach a target location. To learn the robot’s policy, we train a convolutional neural network using two Reinforcement Learning algorithms, Deep Q-Networks (DQN) and Asynchronous Advantage Actor Critic (A3C), and develop a training pipeline that allows the process to scale across several compute nodes. We show that the asynchronous training procedure in A3C can be leveraged to quickly train neural controllers and test them on a real robot in a crowded environment.
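For readers unfamiliar with the A3C procedure named in the abstract, the sketch below illustrates the general class of update it refers to: a single worker's advantage actor-critic step for a small CNN policy, written in PyTorch. This is only a minimal illustration; the network layout, hyperparameters, dummy rollout, and environment interface are assumptions and do not reflect the paper's actual architecture, training pipeline, or code. In full A3C, several such workers would run asynchronously, each applying gradients from its own rollouts to a shared model.

# Illustrative sketch only: a minimal A3C-style actor-critic update for a small
# CNN policy on image observations. Network sizes, the dummy rollout, and all
# hyperparameters are assumptions for illustration, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvActorCritic(nn.Module):
    """Small CNN with a policy head (discrete actions) and a value head."""
    def __init__(self, n_actions, in_channels=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
        )
        self.fc = nn.Linear(32 * 14 * 14, 128)   # assumes 64x64 input images
        self.policy = nn.Linear(128, n_actions)  # action logits
        self.value = nn.Linear(128, 1)           # state-value estimate

    def forward(self, x):
        h = self.conv(x).flatten(1)
        h = F.relu(self.fc(h))
        return self.policy(h), self.value(h)

def a3c_update(model, optimizer, rollout, gamma=0.99, beta=0.01):
    """One advantage actor-critic update from a short rollout.

    In full A3C, several workers each run this on their own rollouts and
    apply the resulting gradients to a shared model asynchronously.
    """
    obs, actions, rewards = rollout
    logits, values = model(obs)
    values = values.squeeze(-1)

    # n-step discounted returns, computed backwards through the rollout
    returns = torch.zeros_like(rewards)
    R = 0.0
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        returns[t] = R

    advantages = returns - values.detach()
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    policy_loss = -(chosen * advantages).mean()        # advantage-weighted policy gradient
    value_loss = F.mse_loss(values, returns)           # critic regression toward returns
    entropy = -(probs * log_probs).sum(dim=-1).mean()  # exploration bonus

    loss = policy_loss + 0.5 * value_loss - beta * entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = ConvActorCritic(n_actions=5)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Fake 8-step rollout of 64x64 single-channel observations.
    obs = torch.randn(8, 1, 64, 64)
    actions = torch.randint(0, 5, (8,))
    rewards = torch.randn(8)
    print("loss:", a3c_update(model, optimizer, (obs, actions, rewards)))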

Citation (APA)

Caruso, M., Regolin, E., Camerota Verdù, F. J., Russo, S. A., Bortolussi, L., & Seriani, S. (2023). Robot Navigation in Crowded Environments: A Reinforcement Learning Approach. Machines, 11(2). https://doi.org/10.3390/machines11020268
