Learning Memory-Based Control for Human-Scale Bipedal Locomotion

Abstract

Controlling a non-statically stable biped is a difficult problem, largely due to the complex hybrid dynamics involved. Recent work has demonstrated the effectiveness of reinforcement learning (RL) for simulation-based training of neural network controllers that successfully transfer to real bipeds. The existing work, however, has primarily used simple memoryless network architectures, even though more sophisticated architectures, such as those including memory, often yield superior performance in other RL domains. In this work, we consider recurrent neural networks (RNNs) for sim-to-real biped locomotion, allowing for policies that learn to use internal memory to model important physical properties. We show that while RNNs significantly outperform memoryless policies in simulation, they do not exhibit superior behavior on the real biped because they overfit to the simulation physics, unless they are trained with dynamics randomization to prevent such overfitting; in that case, they achieve consistently better sim-to-real transfer. We also show that RNNs can use their learned memory states to perform online system identification by encoding parameters of the dynamics into memory.
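
As a concrete illustration of the two ingredients the abstract describes, namely a recurrent policy that carries a hidden state across control steps and per-episode dynamics randomization, the following Python/PyTorch sketch shows one plausible way to structure them. The class and function names, observation and action dimensions, randomized parameters, and their ranges are illustrative assumptions, not the paper's actual implementation; the paper trains its RNN policies with RL in simulation, and only the inference-time structure is sketched here.

import numpy as np
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """LSTM policy: maps observations to joint-level actions while carrying
    a hidden state across timesteps (the 'memory' the abstract refers to)."""
    def __init__(self, obs_dim, act_dim, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, act_dim)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim); hidden is carried between calls
        out, hidden = self.lstm(obs_seq, hidden)
        return torch.tanh(self.head(out)), hidden

def randomize_dynamics(sim, rng):
    # Resample physical parameters at the start of each training episode so
    # the policy cannot overfit to a single fixed simulation. The attribute
    # names and ranges below are hypothetical placeholders.
    sim.ground_friction = rng.uniform(0.5, 1.5)
    sim.joint_damping_scale = rng.uniform(0.8, 1.2)
    sim.link_mass_scale = rng.uniform(0.9, 1.1)

# Rollout sketch: the hidden state is reset once per episode and then
# threaded through every control step, which is what allows the policy to
# encode the episode's (randomized) dynamics parameters into its memory.
policy = RecurrentPolicy(obs_dim=40, act_dim=10)
rng = np.random.default_rng(0)
hidden = None                       # reset memory at episode start
obs = torch.zeros(1, 1, 40)         # placeholder observation from the simulator
for t in range(300):
    with torch.no_grad():
        action, hidden = policy(obs, hidden)
    # obs = simulator_step(action)  # advance the randomized simulation

The key design point is that the hidden state is reset only at episode boundaries, so information about the randomized dynamics gathered early in an episode remains available to the controller at later steps, which is what enables the online system identification behavior described above.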

Citation (APA)
Siekmann, J., Valluri, S., Dao, J., Bermillo, L., Duan, H., Fern, A., & Hurst, J. (2020). Learning Memory-Based Control for Human-Scale Bipedal Locomotion. In Robotics: Science and Systems. MIT Press Journals. https://doi.org/10.15607/RSS.2020.XVI.031
