Multi-feature Fusion for Deep Reinforcement Learning: Sequential Control of Mobile Robots

Abstract

Compared with traditional motion planners, deep reinforcement learning has been increasingly applied to the sequential behaviour control of mobile robots in indoor environments. However, the robot's state in deep reinforcement learning is commonly obtained from a single sensor, which lacks accuracy and stability. In this paper, we propose a novel multi-feature fusion framework. The framework utilizes multiple sensors to gather different scene images around the robot. Once the environment information is gathered, a well-trained autoencoder fuses and extracts the multiple visual features. With the more accurate and stable states extracted by the autoencoder, we train the mobile robot to patrol and navigate in a 3D simulation environment using an asynchronous deep reinforcement learning algorithm. Extensive simulation experiments demonstrate that the proposed multi-feature fusion framework improves not only the convergence rate of the training phase but also the testing performance of the mobile robot.
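
As a rough illustration of the fusion idea described in the abstract, the sketch below (in PyTorch, which the paper does not specify) encodes each camera view with a shared convolutional encoder, concatenates the per-view features, and compresses them into a single latent state through a linear fusion layer; a decoder head reconstructs the features so the autoencoder can be trained with a reconstruction loss. The number of views, image size, and all layer dimensions are illustrative assumptions rather than the authors' architecture; the fused latent vector z is what would stand in for the robot's state in the downstream asynchronous reinforcement learning algorithm.

import torch
import torch.nn as nn

class MultiViewFusionAutoencoder(nn.Module):
    """Fuses images from several cameras into one compact latent state.

    Hypothetical sketch: the paper's exact layers and sizes are not given
    here, so all dimensions below are illustrative assumptions.
    """
    def __init__(self, num_views=3, latent_dim=128):
        super().__init__()
        # One shared convolutional encoder applied to each camera view.
        self.view_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * 16 * 16
        # Fusion layer: concatenated per-view features -> shared latent state.
        self.fuse = nn.Linear(num_views * feat_dim, latent_dim)
        # Decoder head reconstructs the features (training signal only).
        self.defuse = nn.Linear(latent_dim, num_views * feat_dim)

    def forward(self, views):
        # views: list of (B, 3, 64, 64) tensors, one per camera.
        feats = torch.cat([self.view_encoder(v) for v in views], dim=1)
        z = torch.relu(self.fuse(feats))   # fused state for the RL agent
        recon = self.defuse(z)             # reconstruction for AE training
        return z, recon, feats

# Usage: pretrain with a reconstruction loss, then feed z to the policy network.
model = MultiViewFusionAutoencoder()
views = [torch.randn(4, 3, 64, 64) for _ in range(3)]
z, recon, feats = model(views)
loss = nn.functional.mse_loss(recon, feats.detach())

Sharing one encoder across views keeps the parameter count independent of the number of cameras; whether the authors share weights in this way is an assumption of this sketch.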

Citation (APA)
Wang, H., Yang, W., Huang, W., Lin, Z., & Tang, Y. (2018). Multi-feature Fusion for Deep Reinforcement Learning: Sequential Control of Mobile Robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11307 LNCS, pp. 303–315). Springer Verlag. https://doi.org/10.1007/978-3-030-04239-4_27
