Free-energy based reinforcement learning for vision-based navigation with high-dimensional sensory inputs


Abstract

Free-energy based reinforcement learning was proposed for learning in high-dimensional state and action spaces, which cannot be handled by standard function approximation methods in reinforcement learning. In the free-energy reinforcement learning method, the action-value function is approximated as the negative free energy of a restricted Boltzmann machine. In this paper, we test whether it is feasible to use free-energy reinforcement learning for real robot control with raw, high-dimensional sensory inputs through the extraction of task-relevant features in the hidden layer. We first demonstrate, in simulation, that a small mobile robot could efficiently learn a vision-based navigation and battery-capturing task. We then demonstrate, for a simpler battery-capturing task, that free-energy reinforcement learning can be used for online learning in a real robot. The analysis of the learned weights showed that action-oriented state coding was achieved in the hidden layer. © 2010 Springer-Verlag.

Citation (APA)

Elfwing, S., Otsuka, M., Uchibe, E., & Doya, K. (2010). Free-energy based reinforcement learning for vision-based navigation with high-dimensional sensory inputs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6443 LNCS, pp. 215–222). https://doi.org/10.1007/978-3-642-17537-4_27
