Deep Reinforcement Learning for Multi-contact Motion Planning of Hexapod Robots

Abstract

Legged locomotion in a complex environment requires careful planning of the footholds of legged robots. In this paper, a novel Deep Reinforcement Learning (DRL) method is proposed to implement multi-contact motion planning for hexapod robots moving on uneven plum-blossom piles. First, the motion of hexapod robots is formulated as a Markov Decision Process (MDP) with a specified reward function. Second, a transition feasibility model is proposed for hexapod robots, which describes whether a state transition is feasible under the kinematic and dynamic constraints and in turn determines the rewards. Third, foothold and Center-of-Mass (CoM) sequences are sampled from a diagonal Gaussian distribution, and the sequences are optimized by learning optimal policies with the designed DRL algorithm. Both simulation and experimental results on physical systems demonstrate the feasibility and efficiency of the proposed method. Videos are available at https://videoviewpage.wixsite.com/mcrl.
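
To make the third step concrete, below is a minimal sketch (not the authors' implementation) of how a diagonal Gaussian policy over foothold and CoM displacements could be parameterized and sampled in PyTorch. The state and action dimensions (24 and 14), the network sizes, and the class name DiagonalGaussianPolicy are illustrative assumptions; in the paper, the rewards derived from the transition feasibility model would drive the policy-gradient update that uses the sampled log-probabilities.

import torch
import torch.nn as nn

class DiagonalGaussianPolicy(nn.Module):
    """Maps a robot state to a diagonal Gaussian over foothold/CoM offsets (illustrative only)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        # State-independent log standard deviation, one entry per action dimension
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state):
        h = self.net(state)
        # Independent (diagonal) Gaussian: one mean/std per action dimension
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

# Usage sketch: sample one candidate action
# (assumed layout: 6 footholds x 2D offsets + 2D CoM shift = 14 dims)
policy = DiagonalGaussianPolicy(state_dim=24, action_dim=14)
state = torch.randn(24)
dist = policy(state)
action = dist.sample()                    # candidate footholds / CoM displacement
log_prob = dist.log_prob(action).sum()    # would feed a policy-gradient update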

Cite


APA

Fu, H., Tang, K., Li, P., Zhang, W., Wang, X., Deng, G., … Chen, C. (2021). Deep Reinforcement Learning for Multi-contact Motion Planning of Hexapod Robots. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2381–2388). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/328
