Single-Grasp Detection Based on Rotational Region CNN

Abstract

Object grasp detection is foundational to intelligent robotic manipulation. Unlike typical object detection tasks, grasp detection must determine the orientation of the graspable region in addition to localizing it, since the ground-truth boxes in grasp datasets are arbitrarily oriented. This paper presents a novel method for single-grasp detection based on the rotational region CNN (R2CNN). The method applies a common Region Proposal Network (RPN) to predict inclined graspable regions, including location, scale, orientation, and a grasp/non-grasp score. The idea is to treat grasp detection as a multi-task problem involving multiple predictions: the grasp/non-grasp score, the inclined box, and its corresponding axis-aligned bounding box. An inclined non-maximum suppression (NMS) step then computes the final predicted grasp rectangle. Experimental results indicate that the presented method achieves accuracies of 94.6% (image-wise splitting) and 95.6% (object-wise splitting) on the Cornell Grasp Dataset, outperforming state-of-the-art grasp detection models that use only color images.
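The inclined NMS mentioned in the abstract greedily suppresses overlapping rotated grasp candidates by their intersection-over-union, which for inclined boxes requires intersecting two rotated rectangles. Below is a minimal sketch of such a step, not the authors' implementation: the box format `(cx, cy, w, h, theta)`, the greedy score ordering, and the use of Sutherland-Hodgman polygon clipping for the rotated-box IoU are all assumptions for illustration.

```python
import math

def rect_corners(cx, cy, w, h, theta):
    """Counter-clockwise corners of a rectangle centered at (cx, cy),
    rotated by theta radians. (Assumed box parameterization.)"""
    c, s = math.cos(theta), math.sin(theta)
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
            for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2))]

def poly_area(poly):
    """Shoelace formula (absolute area)."""
    a = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0

def clip(subject, clip_poly):
    """Sutherland-Hodgman clipping of subject against a convex CCW polygon."""
    def inside(p, a, b):  # p on or left of directed edge a->b
        return (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0]) >= 0
    def intersect(p1, p2, a, b):  # segment p1p2 with infinite line ab
        denom = (p1[0]-p2[0]) * (a[1]-b[1]) - (p1[1]-p2[1]) * (a[0]-b[0])
        t = ((p1[0]-a[0]) * (a[1]-b[1]) - (p1[1]-a[1]) * (a[0]-b[0])) / denom
        return (p1[0] + t * (p2[0]-p1[0]), p1[1] + t * (p2[1]-p1[1]))
    output = list(subject)
    for i in range(len(clip_poly)):
        if not output:
            break
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        input_list, output = output, []
        prev = input_list[-1]
        for cur in input_list:
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
            prev = cur
    return output

def rotated_iou(b1, b2):
    """IoU of two inclined boxes (cx, cy, w, h, theta)."""
    p1, p2 = rect_corners(*b1), rect_corners(*b2)
    ip = clip(p1, p2)
    inter = poly_area(ip) if len(ip) >= 3 else 0.0
    union = poly_area(p1) + poly_area(p2) - inter
    return inter / union if union > 0 else 0.0

def inclined_nms(boxes, scores, iou_thresh=0.3):
    """Greedy NMS: keep the highest-scoring box, drop candidates that
    overlap a kept box above the threshold. Returns kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(rotated_iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

For single-grasp detection, the top-scoring surviving box would serve as the final predicted grasp rectangle.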

Citation (APA)

Jiang, S., Zhao, X., Cai, Z., Xiang, K., & Ju, Z. (2020). Single-Grasp Detection Based on Rotational Region CNN. In Advances in Intelligent Systems and Computing (Vol. 1043, pp. 131–141). Springer Verlag. https://doi.org/10.1007/978-3-030-29933-0_11
