Compared with traditional motion planners, deep reinforcement learning has been applied increasingly widely to sequential behaviour control of mobile robots in indoor environments. However, the robot's state in deep reinforcement learning is commonly obtained from a single sensor, which lacks accuracy and stability. In this paper, we propose a novel approach called the multi-feature fusion framework, which uses multiple sensors to gather different scene images around the robot. Once this environment information is gathered, a well-trained autoencoder fuses and extracts the multiple visual features. With the more accurate and stable states extracted by the autoencoder, we train the mobile robot to patrol and navigate in a 3D simulation environment with an asynchronous deep reinforcement learning algorithm. Extensive simulation experiments demonstrate that the proposed multi-feature fusion framework improves not only the convergence rate of the training phase but also the testing performance of the mobile robot.
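The abstract describes fusing images from several sensors through an autoencoder whose latent code serves as the state for reinforcement learning. The paper itself does not publish code, so the sketch below is a minimal illustrative assumption: early fusion by concatenating flattened sensor images, a single-hidden-layer autoencoder trained on reconstruction loss with manual NumPy gradients, and hypothetical dimensions (4 sensors, 64-dimensional images, 16-dimensional latent state).

```python
import numpy as np

rng = np.random.default_rng(0)


def init_layer(n_in, n_out):
    # Small random weights, zero bias (illustrative initialization).
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)


class FusionAutoencoder:
    """Fuses flattened images from several sensors into one latent state.

    Hypothetical sketch of the paper's idea, not the authors' architecture:
    concatenate the sensor views (early fusion), encode with one tanh layer,
    and train by minimizing reconstruction error.
    """

    def __init__(self, n_sensors=4, img_dim=64, latent_dim=16):
        self.in_dim = n_sensors * img_dim
        self.We, self.be = init_layer(self.in_dim, latent_dim)
        self.Wd, self.bd = init_layer(latent_dim, self.in_dim)

    def encode(self, images):
        # Early fusion: concatenate all sensor views into one input vector,
        # then map it to the compact latent state used by the RL policy.
        x = np.concatenate([im.ravel() for im in images])
        return np.tanh(x @ self.We + self.be)

    def train_step(self, images, lr=1e-3):
        # One gradient-descent step on mean-squared reconstruction loss,
        # with the backward pass written out by hand.
        x = np.concatenate([im.ravel() for im in images])
        z = np.tanh(x @ self.We + self.be)      # latent state
        x_hat = z @ self.Wd + self.bd           # reconstruction
        err = x_hat - x
        loss = np.mean(err ** 2)
        g_xhat = 2.0 * err / err.size
        g_Wd, g_bd = np.outer(z, g_xhat), g_xhat
        g_h = (g_xhat @ self.Wd.T) * (1.0 - z ** 2)   # tanh derivative
        g_We, g_be = np.outer(x, g_h), g_h
        self.Wd -= lr * g_Wd
        self.bd -= lr * g_bd
        self.We -= lr * g_We
        self.be -= lr * g_be
        return loss


# Usage sketch: four synthetic sensor images, a few training steps.
images = rng.normal(0.0, 1.0, (4, 64))
ae = FusionAutoencoder()
losses = [ae.train_step(images) for _ in range(200)]
state = ae.encode(images)  # 16-dim fused state for the RL agent
```

In a full system, `state` would replace the raw single-sensor observation as input to the asynchronous RL algorithm; the autoencoder would be trained on many scenes rather than one fixed batch.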
Wang, H., Yang, W., Huang, W., Lin, Z., & Tang, Y. (2018). Multi-feature Fusion for Deep Reinforcement Learning: Sequential Control of Mobile Robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11307 LNCS, pp. 303–315). Springer Verlag. https://doi.org/10.1007/978-3-030-04239-4_27