A Hierarchical Framework for Quadruped Robots Gait Planning Based on DDPG


Abstract

In recent years, significant progress has been made in applying reinforcement learning to the control of legged robots. Quadruped robots, however, pose a particular difficulty: their continuous state space and vast action space make optimal control with a single, monolithic reinforcement learning controller hard to achieve. This paper introduces a hierarchical reinforcement learning framework based on the Deep Deterministic Policy Gradient (DDPG) algorithm to achieve optimal motion control for quadruped robots. The framework consists of a high-level planner responsible for generating ideal motion parameters, a low-level controller using model predictive control (MPC), and a trajectory generator. The agents within the high-level planner are trained to supply the ideal motion parameters to the low-level controller. The low-level controller uses MPC and PD controllers to generate the foot-end forces and computes the joint motor torques through inverse kinematics. Simulation results show that the motion performance of the trained hierarchical framework is superior to that obtained using DDPG alone.
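The abstract's two-layer architecture can be sketched as a single control step: a high-level policy emits motion parameters, and a low-level controller turns them into joint torques. The sketch below is purely illustrative and not the authors' implementation: the trained DDPG actor is replaced by a stub returning fixed parameters, the MPC component by a simple PD law, and the inverse-kinematics mapping by a hypothetical scalar Jacobian; all names and numbers are assumptions.

```python
# Illustrative sketch of a hierarchical control step (NOT the paper's code).
# High level: stand-in for the DDPG actor.  Low level: PD stand-in for MPC+PD.

def high_level_planner(state):
    """Stub for the trained DDPG actor: maps robot state to ideal motion
    parameters (here, a desired foot height and gait period)."""
    # A real actor would be a neural network; we return fixed parameters.
    return {"foot_height": 0.08, "gait_period": 0.5}

def low_level_controller(state, params, kp=120.0, kd=8.0):
    """PD stand-in for the MPC + PD low-level controller: computes a
    vertical foot-end force, then a joint torque via a placeholder
    scalar 'inverse kinematics' (leg Jacobian) mapping."""
    pos_err = params["foot_height"] - state["foot_z"]
    vel_err = -state["foot_vz"]
    force = kp * pos_err + kd * vel_err   # foot-end vertical force (N)
    jacobian = 0.2                        # placeholder leg Jacobian (m)
    return jacobian * force               # joint torque (N·m)

def control_step(state):
    """One pass through the hierarchy: planner -> controller -> torque."""
    params = high_level_planner(state)
    return low_level_controller(state, params)

torque = control_step({"foot_z": 0.05, "foot_vz": 0.0})
```

In the paper the planner's output would be refreshed at a lower rate than the torque loop, with the MPC solving for ground-reaction forces over a prediction horizon; here both layers run in lockstep only to keep the sketch self-contained.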

Citation (APA)

Li, Y., Chen, Z., Wu, C., Mao, H., & Sun, P. (2023). A Hierarchical Framework for Quadruped Robots Gait Planning Based on DDPG. Biomimetics, 8(5). https://doi.org/10.3390/biomimetics8050382
