Improving humanoid locomotive performance with learnt approximated dynamics via Gaussian processes for regression

  • Jun Morimoto
  • Christopher G. Atkeson
  • Gen Endo
  • Gordon Cheng


We propose to improve the locomotive performance of humanoid robots by using approximated biped stepping and walking dynamics with reinforcement learning (RL). Although RL is a useful non-linear optimizer, it is usually difficult to apply RL to real robotic systems due to the large number of iterations required to acquire suitable policies. In this study, we first approximated the dynamics by using data from a real robot, and then applied the estimated dynamics in RL in order to improve stepping and walking policies. Gaussian processes were used to approximate the dynamics. By using Gaussian processes, we could estimate a probability distribution of a target function with a given covariance function. Thus, RL can take the uncertainty of the approximated dynamics into account throughout the learning process. We show that we can improve stepping and walking policies by using an RL method with the approximated models in both simulated and real environments. Experimental validation of the proposed approach on a real humanoid robot is also presented.
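The key property the abstract relies on is that Gaussian process regression returns not just a predicted value but a full predictive distribution, so a model-based RL method can weight its planning by the model's uncertainty. The sketch below is a minimal, generic GP regression implementation with an RBF covariance function, not the authors' code; all function names, hyperparameters, and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """RBF (squared-exponential) covariance between row vectors of A and B."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return signal_var * np.exp(-0.5 * sq_dists / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-2):
    """Return the GP predictive mean and variance at X_test.

    Standard GP regression: condition the prior (defined by the kernel)
    on noisy observations (X_train, y_train).
    """
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss_diag = np.ones(len(X_test))  # RBF kernel has unit diagonal here

    # Cholesky factorization for a numerically stable solve of K^{-1} y.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = K_ss_diag - np.sum(v**2, axis=0) + noise_var
    return mean, var

# Toy example: learn a 1-D "dynamics" mapping from noisy samples.
X = np.linspace(0.0, 5.0, 20)[:, None]
y = np.sin(X).ravel()
mean_near, var_near = gp_predict(X, y, np.array([[2.5]]))
mean_far, var_far = gp_predict(X, y, np.array([[15.0]]))
```

Near the training data the predictive variance is small; far from it the variance grows back toward the prior, which is exactly the uncertainty signal an uncertainty-aware RL method can exploit.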

