Mobile robots exploration through CNN-based reinforcement learning

  • Tai L
  • Liu M
Citations: N/A
Readers: 83 (Mendeley users who have this article in their library)

This article is free to access.

Abstract

Exploration in an unknown environment is a fundamental task for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input, and the feature representation of the depth image is extracted by a pre-trained convolutional neural network. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieves exploration and obstacle-avoidance abilities in several different simulated environments. To our knowledge, this is the first time reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
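To make the pipeline described above concrete, the following is a minimal PyTorch sketch of that kind of architecture: a frozen, pre-trained CNN encodes a depth image into a feature vector, and a small Q-network maps that vector to Q-values over a few discrete motion commands selected epsilon-greedily. The ResNet-18 backbone, the three-command action set, and the layer sizes are illustrative assumptions, not the authors' exact implementation.

# Sketch only: a frozen pre-trained CNN as a fixed depth-image encoder,
# plus a small Q-network over discrete motion commands (assumed action set).
import torch
import torch.nn as nn
import torchvision.models as models

ACTIONS = ["forward", "turn_left", "turn_right"]  # assumed discrete command set

class DepthFeatureExtractor(nn.Module):
    """Pre-trained CNN used only as a fixed feature encoder (weights frozen)."""
    def __init__(self):
        super().__init__()
        # ImageNet-pretrained ResNet-18 is an illustrative stand-in for the
        # paper's pre-trained CNN; loading DEFAULT weights downloads them.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier
        for p in self.features.parameters():
            p.requires_grad = False  # encoder stays fixed during RL

    def forward(self, depth_image):
        # depth_image: (B, 1, H, W); replicate the depth channel to 3 channels
        x = depth_image.repeat(1, 3, 1, 1)
        return self.features(x).flatten(1)  # (B, 512) feature vector

class QNetwork(nn.Module):
    """Small MLP mapping CNN features to Q-values over the discrete actions."""
    def __init__(self, feature_dim=512, num_actions=len(ACTIONS)):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, features):
        return self.head(features)

def select_action(q_values, epsilon=0.1):
    """Epsilon-greedy action selection, as in standard DQN training."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ACTIONS), (1,)).item()
    return int(q_values.argmax(dim=1).item())

if __name__ == "__main__":
    encoder, qnet = DepthFeatureExtractor().eval(), QNetwork()
    depth = torch.rand(1, 1, 224, 224)  # placeholder depth frame
    with torch.no_grad():
        action = select_action(qnet(encoder(depth)))
    print("command:", ACTIONS[action])

In a full DQN setup, the Q-network head would additionally be trained from an experience replay buffer with a target network; only the control-time forward pass is sketched here.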

Cite

APA

Tai, L., & Liu, M. (2016). Mobile robots exploration through CNN-based reinforcement learning. Robotics and Biomimetics, 3(1). https://doi.org/10.1186/s40638-016-0055-x
