Learning transferable policies with improved graph neural networks on serial robotic structure

Abstract

Robotic control via reinforcement learning (RL) has made significant advances. However, a serious weakness of this approach is that RL models are prone to overfitting and transfer poorly to new settings. Transfer in reinforcement learning means that only a few samples are needed to train policy networks for new tasks. In this paper we investigate the problem of learning transferable policies for robots with serial structures, such as robotic arms, with the help of graph neural networks (GNNs). GNNs have previously been employed to explicitly incorporate the robot structure into the policy network, making the policy easier to generalize or transfer. Based on a kinematic analysis of the serial robotic structure, we further improve the policy network by proposing a weighted information aggregation strategy. Experiments are conducted in a few-shot policy learning setting on a robotic arm, and the results show that the new aggregation strategy significantly improves both learning speed and policy accuracy.
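
The abstract does not give the exact formulation of the weighted aggregation, so the following is only a minimal, hypothetical sketch of what a weighted message-passing step over a serial-chain robot graph might look like, contrasted with a uniform mean. The chain topology, feature sizes, and the `kinematic_weight` function are illustrative assumptions, not the authors' method or learned values.

```python
# Hypothetical sketch (not the paper's implementation): weighted neighbor
# aggregation on a serial (chain) robot graph vs. a uniform-mean baseline.
# Node features stand in for per-joint observations (e.g. angles, velocities).
import numpy as np

def chain_adjacency(n_joints):
    """Adjacency list of a serial kinematic chain: joint i links to i-1 and i+1."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n_joints]
            for i in range(n_joints)}

def aggregate(features, adjacency, weight_fn=None):
    """One round of message passing.

    weight_fn(i, j) -> scalar weight for the message from neighbor j to node i.
    If None, fall back to a uniform mean over neighbors.
    """
    out = np.zeros_like(features)
    for i, neighbors in adjacency.items():
        if not neighbors:
            continue
        if weight_fn is None:
            w = np.ones(len(neighbors))
        else:
            w = np.array([weight_fn(i, j) for j in neighbors], dtype=float)
        w = w / w.sum()                        # normalize weights per node
        out[i] = sum(wk * features[j] for wk, j in zip(w, neighbors))
    return out

if __name__ == "__main__":
    n = 4                                      # 4-joint arm (hypothetical)
    feats = np.random.randn(n, 8)              # per-joint feature vectors
    adj = chain_adjacency(n)
    # Illustrative weighting only: messages from joints nearer the base count
    # more, loosely echoing the kinematic motivation stated in the abstract.
    kinematic_weight = lambda i, j: 1.0 / (1 + j)
    uniform = aggregate(feats, adj)                      # plain GNN step
    weighted = aggregate(feats, adj, kinematic_weight)   # weighted variant
    print(uniform.shape, weighted.shape)
```

In an actual policy network the weights would typically be learned or derived from the kinematic analysis rather than fixed as above; the sketch only illustrates how a non-uniform aggregation differs structurally from the uniform case.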

Cite

APA

Zhang, F., Xiong, F., Yang, X., & Liu, Z. (2019). Learning transferable policies with improved graph neural networks on serial robotic structure. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11955 LNCS, pp. 115–126). Springer. https://doi.org/10.1007/978-3-030-36718-3_10
