CURIOSITY-DRIVEN REINFORCEMENT LEARNING AGENT FOR MAPPING UNKNOWN INDOOR ENVIRONMENTS


Abstract

Autonomous exploration and mapping is one of the open challenges of robotics and artificial intelligence. Especially when the environment is unknown, choosing the optimal navigation directive is not straightforward. In this paper, we propose a reinforcement learning framework for navigating, exploring, and mapping unknown environments. The reinforcement learning agent is in charge of selecting the commands for steering the mobile robot, while a SLAM algorithm estimates the robot pose and maps the environment. To select optimal actions, the agent is trained to be curious about the world. This concept translates into the introduction of a curiosity-driven reward function that encourages the agent to steer the mobile robot towards unknown and unseen areas of the world and the map. We test our approach on exploration challenges in different indoor environments. The agent trained with the proposed reward function outperforms agents trained with reward functions commonly used in the literature for solving such tasks.
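As an illustrative sketch only (the paper does not publish its reward implementation here), a curiosity-driven exploration reward of the kind the abstract describes could be realized by rewarding the agent for each occupancy-grid cell newly revealed by the SLAM map between consecutive steps. The grid encoding, the `UNKNOWN` sentinel value, and the function name below are assumptions for illustration, not the authors' code:

```python
import numpy as np

UNKNOWN = -1  # assumed sentinel for an unobserved occupancy-grid cell


def curiosity_reward(prev_map, new_map):
    """Illustrative curiosity-style reward: the number of grid cells
    that were unknown in the previous map but observed in the new one.
    This encourages steering towards unseen areas of the map."""
    newly_seen = np.logical_and(prev_map == UNKNOWN, new_map != UNKNOWN)
    return int(newly_seen.sum())


# Example: a 4x4 grid where one step of exploration reveals 3 cells.
prev = np.full((4, 4), UNKNOWN)
new = prev.copy()
new[0, :3] = 0  # three cells now observed as free space
print(curiosity_reward(prev, new))  # prints 3
```

In practice such a term would be scaled and combined with other shaping terms (e.g. collision penalties); the abstract's claim is that rewarding map novelty of this kind outperforms the standard exploration rewards used in the literature.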

Citation (APA)

Botteghi, N., Schulte, R., Sirmacek, B., Poel, M., & Brune, C. (2021). CURIOSITY-DRIVEN REINFORCEMENT LEARNING AGENT for MAPPING UNKNOWN INDOOR ENVIRONMENTS. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Vol. 5, pp. 129–136). Copernicus GmbH. https://doi.org/10.5194/isprs-annals-V-1-2021-129-2021
