Reinforcement learning algorithm with CTRNN in continuous action space

Abstract

Applying traditional reinforcement learning algorithms to robot motion control tasks is difficult because most algorithms handle only discrete actions and assume complete observability of the state. This paper addresses these two problems by combining a reinforcement learning algorithm with a CTRNN learning algorithm. We carried out an experiment on the pendulum swing-up task without rotational speed information. The results show that the rotational speed, treated as a hidden state, is estimated and encoded in the activation of a context neuron. As a result, this task is accomplished in several hundred trials using the proposed algorithm. © Springer-Verlag Berlin Heidelberg 2006.
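The abstract's key idea is that a CTRNN controller receiving only partial observations (the pendulum angle, but not its rotational speed) can recover the hidden velocity in the activation of its recurrent context units. The sketch below is not the authors' implementation; it is a minimal, illustrative Euler-discretized CTRNN driving a toy pendulum, with all names, network sizes, and constants chosen as assumptions, to show how such a controller would be structured and why the observation deliberately omits the velocity.

```python
# Minimal sketch (illustrative only, not the paper's algorithm): a discrete-time
# CTRNN controller for pendulum swing-up that observes only the angle.
import numpy as np

class CTRNN:
    """Continuous-time RNN, Euler-discretized with step dt."""
    def __init__(self, n_in, n_hidden, n_out, tau=2.0, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.tau, self.dt = tau, dt
        self.y = np.zeros(n_hidden)          # internal states of hidden/context units

    def step(self, x):
        # tau * dy/dt = -y + W_rec @ tanh(y) + W_in @ x
        h = np.tanh(self.y)
        dy = (-self.y + self.W_rec @ h + self.W_in @ x) / self.tau
        self.y = self.y + self.dt * dy
        # continuous action (torque command) in [-1, 1]
        return np.tanh(self.W_out @ np.tanh(self.y))

def rollout(net, steps=200, dt=0.05, g=9.8, l=1.0, m=1.0):
    """One episode on a toy pendulum; only the angle is fed to the network,
    so the recurrent units must integrate the angle history to infer velocity."""
    theta, omega = np.pi, 0.0                # start hanging straight down
    total_reward = 0.0
    for _ in range(steps):
        obs = np.array([np.cos(theta), np.sin(theta)])   # no omega in the observation
        torque = 2.0 * net.step(obs)[0]
        omega += dt * (-g / l * np.sin(theta) + torque / (m * l ** 2))
        theta += dt * omega
        total_reward += np.cos(theta)        # +1 per step when upright
    return total_reward

net = CTRNN(n_in=2, n_hidden=8, n_out=1)
print("return of an untrained controller:", rollout(net))
```

In the paper's setting, a reinforcement learning signal (rather than the random weights above) would adapt the network so that the swing-up succeeds, and the hidden rotational speed would become readable from the context units' activations.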

Cite

CITATION STYLE

APA

Arie, H., Namikawa, J., Ogata, T., Tani, J., & Sugano, S. (2006). Reinforcement learning algorithm with CTRNN in continuous action space. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4232 LNCS, pp. 387–396). Springer Verlag. https://doi.org/10.1007/11893028_44
