Dealing with Ambiguity in Robotic Grasping via Multiple Predictions

Abstract

Humans excel at grasping and manipulating objects because of their lifelong experience and knowledge of the 3D shape and weight distribution of objects. The lack of such intuition in robots makes robotic grasping an exceptionally challenging task. There are often several equally viable options for grasping an object, yet this ambiguity is not modeled by conventional systems, which estimate a single, optimal grasp position. We propose to tackle this problem by simultaneously estimating multiple grasp poses from a single RGB image of the target object. Further, we reformulate the problem of robotic grasping by replacing conventional grasp rectangles with grasp belief maps, which hold more precise location information than a rectangle and account for the uncertainty inherent to the task. We augment a fully convolutional neural network with a multiple-hypothesis prediction model that predicts a set of grasp hypotheses in under 60 ms, which is critical for real-time robotic applications. Grasp detection accuracy reaches over 90% for unseen objects, outperforming the current state of the art on this task.
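To make the two key ideas concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released code: rendering a grasp belief map as a 2D Gaussian, and a relaxed winner-takes-all loss of the kind commonly used to train multiple-hypothesis prediction models. The function names, image size, `sigma`, and the relaxation weight `eps` are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F


def grasp_belief_map(center, size=(224, 224), sigma=4.0):
    """Render a grasp belief map as a 2D Gaussian centred on a grasp point.

    center: (x, y) grasp location in pixels. The Gaussian width `sigma`
    is an illustrative choice, not a value taken from the paper.
    """
    ys, xs = torch.meshgrid(
        torch.arange(size[0], dtype=torch.float32),
        torch.arange(size[1], dtype=torch.float32),
        indexing="ij",
    )
    cx, cy = center
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma**2))


def winner_takes_all_loss(hypotheses, target, eps=0.05):
    """Relaxed winner-takes-all loss over M predicted belief maps.

    hypotheses: (B, M, H, W) belief maps from the M prediction heads
    target:     (B, H, W)    ground-truth belief map
    eps:        small weight shared by the non-winning hypotheses so
                every head still receives some gradient (assumes M > 1)
    """
    B, M, H, W = hypotheses.shape
    # Per-hypothesis MSE against the single annotated target.
    per_hyp = F.mse_loss(
        hypotheses, target.unsqueeze(1).expand_as(hypotheses), reduction="none"
    ).mean(dim=(2, 3))                              # (B, M)
    # The closest hypothesis gets most of the weight; the rest share eps.
    winner = per_hyp.argmin(dim=1, keepdim=True)    # (B, 1)
    weights = torch.full_like(per_hyp, eps / (M - 1))
    weights.scatter_(1, winner, 1.0 - eps)
    return (weights * per_hyp).sum(dim=1).mean()
```

Training with a loss of this form lets each head specialize in a different plausible grasp, which is how a single forward pass can return a diverse set of hypotheses instead of one averaged prediction.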

Citation (APA)

Ghazaei, G., Laina, I., Rupprecht, C., Tombari, F., Navab, N., & Nazarpour, K. (2019). Dealing with Ambiguity in Robotic Grasping via Multiple Predictions. In Lecture Notes in Computer Science (Vol. 11364 LNCS, pp. 38–55). Springer. https://doi.org/10.1007/978-3-030-20870-7_3
