Multifingered Grasping Based on Multimodal Reinforcement Learning


Abstract

In this work, we tackle the challenging problem of grasping novel objects using a high-DoF anthropomorphic hand-arm system. Combining fingertip tactile sensing, joint torques, and proprioception, a multimodal agent is trained in simulation to learn the finger motions and to determine when to lift an object. Binary contact information and level-based joint torques simplify transferring the learned model to the real robot. To reduce the exploration space, we first generate postural synergies by collecting a dataset covering various grasp types and applying principal component analysis. Curriculum learning is further applied to adjust and randomize the initial object pose based on training performance. Simulation and real-robot experiments with dedicated initial grasping poses show that our method outperforms two baseline models in grasp success rate for both seen and unseen objects. This learning approach further serves as a foundation for complex in-hand manipulation based on the multi-sensory system.
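To illustrate the synergy-extraction step mentioned above, below is a minimal sketch of computing postural synergies from a grasp-pose dataset via PCA. The dataset shape, the helper names (extract_synergies, synergy_to_joints), and the choice of three synergy components are assumptions for illustration, not details taken from the paper; the idea is only that the agent acts in a low-dimensional synergy space rather than commanding every joint directly.

    import numpy as np

    def extract_synergies(joint_angles: np.ndarray, num_synergies: int = 3):
        """PCA over recorded hand joint angles, shape (num_grasps, num_joints).

        Returns the mean pose and the leading principal directions
        (the postural synergies)."""
        mean_pose = joint_angles.mean(axis=0)
        centered = joint_angles - mean_pose
        # SVD of the centered data; rows of vt are the principal directions.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        synergy_basis = vt[:num_synergies]  # (num_synergies, num_joints)
        return mean_pose, synergy_basis

    def synergy_to_joints(alpha: np.ndarray, mean_pose: np.ndarray,
                          synergy_basis: np.ndarray) -> np.ndarray:
        """Map low-dimensional synergy activations back to full joint angles."""
        return mean_pose + alpha @ synergy_basis

In this sketch, the reinforcement-learning policy would output the low-dimensional activations (alpha), and synergy_to_joints would expand them into joint commands for the anthropomorphic hand, which is how PCA-based synergies reduce the exploration space.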

APA

Liang, H., Cong, L., Hendrich, N., Li, S., Sun, F., & Zhang, J. (2022). Multifingered Grasping Based on Multimodal Reinforcement Learning. IEEE Robotics and Automation Letters, 7(2), 1174–1181. https://doi.org/10.1109/LRA.2021.3138545
