GraspCNN: Real-Time Grasp Detection Using a New Oriented Diameter Circle Representation


Abstract

This paper proposes GraspCNN, an approach to grasp detection in which a feasible robotic grasp is detected as an oriented diameter circle in an RGB image, using a single convolutional neural network. Detecting robotic grasps as oriented diameter circles simplifies the grasp representation. In addition to this novel grasp representation, a grasp pose localization algorithm is proposed that projects an oriented diameter circle back to a 6D grasp pose in the point cloud. GraspCNN predicts feasible grasping circles and grasp probabilities directly from the RGB image. Experiments show that GraspCNN achieves 96.5% accuracy on the Cornell Grasping Dataset, outperforming existing one-stage grasp detectors. GraspCNN is fast and stable, processing RGB images at 50 fps and meeting the requirements of real-time applications. To detect objects and locate feasible grasps simultaneously, GraspCNN is executed in parallel with YOLO, achieving strong performance on both object detection and grasp detection.
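The oriented diameter circle reduces a grasp to four image-space quantities: the circle center (cx, cy), the diameter d (the gripper opening projected into the image), and the orientation theta of the gripper's closing direction. The sketch below is a minimal, hypothetical illustration of how such a circle could be lifted to a 6D grasp pose from the depth at the circle center and the camera intrinsics; the paper's actual localization algorithm operates on the point cloud and is not reproduced here, and all names (circle_to_6d_pose, K) are illustrative assumptions.

```python
import numpy as np

def circle_to_6d_pose(cx, cy, d, theta, depth, K):
    """Rough back-projection of an oriented diameter circle to a 6D grasp pose.

    cx, cy : circle center in pixels
    d      : circle diameter in pixels (projected gripper opening)
    theta  : in-plane gripper orientation in radians
    depth  : depth (m) at the circle center
    K      : 3x3 camera intrinsic matrix
    Returns (position, rotation, width) in the camera frame.
    """
    fx, fy = K[0, 0], K[1, 1]
    px, py = K[0, 2], K[1, 2]

    # Back-project the circle center to a 3D point via the pinhole model.
    x = (cx - px) * depth / fx
    y = (cy - py) * depth / fy
    position = np.array([x, y, depth])

    # Simplifying assumption: approach along the optical axis, with the
    # gripper closing direction given by theta. (The paper derives the
    # full orientation from the local point cloud instead.)
    approach = np.array([0.0, 0.0, 1.0])
    closing = np.array([np.cos(theta), np.sin(theta), 0.0])
    binormal = np.cross(approach, closing)
    rotation = np.column_stack([closing, binormal, approach])

    # Physical gripper opening width recovered from the pixel diameter.
    width = d * depth / fx
    return position, rotation, width

# Example: a circle detected at (320, 240) with 60 px diameter, 30 degrees.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
pos, rot, w = circle_to_6d_pose(320, 240, 60, np.deg2rad(30), 0.5, K)
print(pos, w)
```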

Cite

APA

Xu, Y., Wang, L., Yang, A., & Chen, L. (2019). GraspCNN: Real-time grasp detection using a new oriented diameter circle representation. IEEE Access, 7, 159322–159331. https://doi.org/10.1109/ACCESS.2019.2950535
