GraspVDN: scene-oriented grasp estimation by learning vector representations of grasps

Abstract

Grasp estimation is a fundamental capability for robot manipulation tasks. In this work, we present a scene-oriented grasp estimation scheme that takes into account the constraints imposed on the grasp pose by the environment and trains on samples that satisfy those constraints. We formulate valid grasps for a parallel-jaw gripper as vectors in a two-dimensional (2D) image and detect them with a fully convolutional network that simultaneously estimates the vectors' origins and directions. The detected vectors are then converted to 6 degree-of-freedom (6-DOF) grasps with a tailored strategy. As such, the network can detect multiple grasp candidates in a cluttered scene in one shot, using only an RGB image as input. We evaluate our approach on the GraspNet-1Billion dataset and achieve performance comparable to the state of the art while remaining efficient at runtime.
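The abstract does not detail how a detected 2D vector becomes a 6-DOF grasp, so the sketch below is only a minimal illustration of one plausible conversion, not the paper's tailored strategy. It assumes a depth value and camera intrinsics are available for back-projecting the vector's origin, and that the gripper approaches along the camera's viewing axis; the function name vector_to_grasp and all parameters are hypothetical.

```python
import numpy as np

def vector_to_grasp(origin_px, direction_px, depth, K):
    """Convert a detected 2D grasp vector to a 6-DOF grasp pose.

    origin_px:    (u, v) pixel where the vector originates
    direction_px: 2D direction of the vector in the image
    depth:        depth at origin_px in meters (assumed available)
    K:            3x3 camera intrinsic matrix
    """
    # Back-project the 2D origin to a 3D point in the camera frame.
    u, v = origin_px
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    t = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

    # Simplifying assumption: the gripper approaches along the camera's
    # viewing direction (the paper's conversion strategy may differ).
    approach = np.array([0.0, 0.0, 1.0])

    # The vector's in-image angle fixes the gripper's closing axis.
    theta = np.arctan2(direction_px[1], direction_px[0])
    closing = np.array([np.cos(theta), np.sin(theta), 0.0])
    binormal = np.cross(approach, closing)

    # Assemble a right-handed rotation (columns: closing, binormal, approach).
    R = np.stack([closing, binormal, approach], axis=1)
    return R, t  # 6-DOF grasp: rotation + translation

# Example with made-up intrinsics and a detected vector:
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = vector_to_grasp((340, 260), (0.8, 0.6), depth=0.55, K=K)
```

Since detection itself runs on RGB only, depth would enter only at this conversion step; under these assumptions, each detected vector yields one grasp pose, so a single forward pass produces multiple 6-DOF candidates per scene.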

Cite (APA)

Dong, Z., Tian, H., Bao, X., Yan, Y., & Chen, F. (2022). GraspVDN: scene-oriented grasp estimation by learning vector representations of grasps. Complex and Intelligent Systems, 8(4), 2911–2922. https://doi.org/10.1007/s40747-021-00459-x
