Deep reinforcement learning for robust robot navigation in complex and crowded environments

Abstract

In complex environments with dense pedestrian traffic, mobile robots often experience errors and instability during trajectory tracking and dynamic obstacle avoidance. This paper presents a scene perception and decision-making strategy based on deep reinforcement learning. Temporal sequences of LiDAR data and a sub-goal are used as input, and the action output is generated by an end-to-end network. We designed an adaptive heading reward that guides the robot to proactively avoid pedestrians while moving efficiently toward its target. Through continuous interaction with a dynamic environment, the robot learns an optimal decision-making strategy by maximizing cumulative rewards. A series of simulation experiments and real-world validations demonstrate that the proposed strategy achieves an effective balance between collision avoidance and real-time performance in robotic navigation. Furthermore, extensive results confirm that the method remains robust in unfamiliar environments and under varying crowd densities. Finally, tests on a hardware platform indicate that the strategy offers strong stability and adaptability in practical applications, effectively meeting obstacle avoidance requirements and validating its reliability in complex dynamic settings.
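The abstract does not give the exact form of the adaptive heading reward, but its stated behavior (reward alignment with the sub-goal bearing, relaxed near pedestrians to encourage proactive avoidance) can be sketched as follows. All function names, thresholds, and weights here are illustrative assumptions, not the authors' actual formulation:

```python
import math

def adaptive_heading_reward(robot_heading, goal_bearing, min_pedestrian_dist,
                            safe_dist=1.0, w_heading=0.3, w_penalty=0.5):
    """Hypothetical adaptive heading reward.

    Rewards alignment of the robot's heading with the bearing to the
    sub-goal, but scales that reward down (and adds a proximity penalty)
    when the nearest pedestrian is closer than `safe_dist`, so the policy
    prefers detouring over driving straight at a person.
    """
    # Signed heading error wrapped to [-pi, pi], then folded to [0, pi];
    # 0 means the robot faces the sub-goal exactly.
    err = abs(math.atan2(math.sin(goal_bearing - robot_heading),
                         math.cos(goal_bearing - robot_heading)))
    # Linear in the error: +w_heading when aligned, -w_heading when reversed.
    heading_term = w_heading * (1.0 - 2.0 * err / math.pi)

    if min_pedestrian_dist < safe_dist:
        # Inside the safety radius: attenuate goal-seeking and penalize
        # proximity, pushing the learned policy toward avoidance first.
        scale = min_pedestrian_dist / safe_dist
        return scale * heading_term - w_penalty * (1.0 - scale)
    return heading_term
```

In a DRL training loop, a term like this would be summed with the usual goal-reached bonus and collision penalty before being returned from the environment's step function.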

Citation (APA)

Meng, J., Zou, J., Wang, S., Yang, R., Kumar, A., & Kim, J. (2025). Deep reinforcement learning for robust robot navigation in complex and crowded environments. Journal of King Saud University - Computer and Information Sciences, 37(10). https://doi.org/10.1007/s44443-025-00357-z
