Reinforcement Learning for Humanoid Robotics

  • Jan Peters
  • Sethu Vijayakumar
  • Stefan Schaal

Reinforcement learning offers one of the most general frameworks for taking traditional robotics towards true autonomy and versatility. However, applying reinforcement learning to high-dimensional movement systems like humanoid robots remains an unsolved problem. In this paper, we discuss different approaches to reinforcement learning in terms of their applicability to humanoid robotics. Methods can be coarsely classified into three categories: greedy methods, 'vanilla' policy gradient methods, and natural gradient methods. We argue that greedy methods are unlikely to scale to the domain of humanoid robotics, as they are problematic when used with function approximation. 'Vanilla' policy gradient methods, on the other hand, have been successfully applied to real-world robots, including at least one humanoid robot [3]. We demonstrate that these methods can be significantly improved by using the natural policy gradient instead of the regular policy gradient. A derivation of the natural policy gradient is provided, proving that the average policy gradient of Kakade [10] is indeed the true natural gradient. A general algorithm for estimating the natural gradient, the Natural Actor-Critic algorithm, is introduced. Under suitable conditions, this algorithm converges to the nearest local minimum of the cost function with respect to the Fisher information metric. The algorithm outperforms non-natural policy gradients by far in a cart-pole balancing evaluation and in learning nonlinear dynamic motor primitives for humanoid robot control. It offers a promising route for the development of reinforcement learning for truly high-dimensional continuous state-action systems.
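The distinction the abstract draws between the 'vanilla' policy gradient and the natural policy gradient can be illustrated on a toy problem. The sketch below is my own minimal construction, not the paper's Natural Actor-Critic: it uses a one-dimensional Gaussian policy on a quadratic-reward bandit, where the vanilla gradient is the score-weighted reward estimate and the natural gradient premultiplies it by the inverse Fisher information, which for the mean of a Gaussian is simply the variance σ².

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_gradients(theta, sigma, target=2.0, n=100_000):
    """Vanilla and natural policy gradient estimates for a Gaussian
    policy pi(a) = N(theta, sigma^2) on the reward r(a) = -(a - target)^2."""
    a = theta + sigma * rng.standard_normal(n)   # sample actions from the policy
    r = -(a - target) ** 2                       # bandit reward
    score = (a - theta) / sigma**2               # grad_theta log pi(a)
    g = np.mean(score * r)                       # 'vanilla' policy gradient
    F = 1.0 / sigma**2                           # Fisher information of the mean
    return g, g / F                              # natural gradient = F^{-1} g

g, g_nat = policy_gradients(theta=0.0, sigma=0.5)
# analytically dJ/dtheta = -2*(theta - target) = 4.0 here,
# so the natural gradient is sigma^2 * 4.0 = 1.0
```

The point of the premultiplication by F⁻¹ is invariance: rescaling the exploration noise σ rescales the vanilla gradient, while the natural gradient takes a step of consistent size in the space of policy distributions rather than in raw parameter space.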

Author-supplied keywords

  • Robotics
  • RL
  • Actor-Critic




