Robotic grasping of novel objects

ISSN: 1049-5258
Citations: 87
Readers: 28 (Mendeley users who have this article in their library)

Abstract

We consider the problem of grasping novel objects, specifically, ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.
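The approach the abstract describes, that is, learning a direct mapping from image features to grasp-point likelihood from synthetic training data, can be illustrated with a minimal sketch. The patch features, the logistic-regression classifier, and the toy synthetic images below are illustrative assumptions for exposition, not the authors' actual features or code.

```python
# A minimal sketch of the abstract's idea: train a supervised classifier on
# synthetically generated labeled images, then predict a grasp point on a
# novel image directly from 2-d image features, without building a 3-d model.
# Feature choice and classifier are assumptions made for this illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

PATCH = 10  # patch size in pixels (assumed)


def patch_features(image, cy, cx):
    """Hand-crafted features of the patch centred at (cy, cx):
    mean intensity, variance, and mean gradient magnitudes (illustrative)."""
    half = PATCH // 2
    p = image[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    gy, gx = np.gradient(p.astype(float))
    return np.array([p.mean(), p.var(), np.abs(gy).mean(), np.abs(gx).mean()])


def make_synthetic_example(rng):
    """Stand-in for a rendered synthetic training image: a bright handle-like
    bar on a dark background, with the grasp point at the bar's centre."""
    img = rng.uniform(0.0, 0.2, size=(64, 64))
    cy, cx = rng.integers(16, 48, size=2)
    img[cy - 2:cy + 3, cx - 8:cx + 9] += 0.8
    return img, (cy, cx)


def training_set(n_images=200, neg_per_image=5, seed=0):
    """Build (features, labels): one positive patch at the grasp point of each
    synthetic image, plus several randomly placed negative patches."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_images):
        img, (cy, cx) = make_synthetic_example(rng)
        X.append(patch_features(img, cy, cx)); y.append(1)
        for _ in range(neg_per_image):
            ry, rx = rng.integers(8, 56, size=2)
            X.append(patch_features(img, ry, rx)); y.append(0)
    return np.array(X), np.array(y)


def predict_grasp_point(model, image, stride=2):
    """Scan the novel image and return the pixel whose surrounding patch the
    classifier scores as most likely to be a good grasp point."""
    best, best_p = None, -1.0
    h, w = image.shape
    for cy in range(8, h - 8, stride):
        for cx in range(8, w - 8, stride):
            p = model.predict_proba(patch_features(image, cy, cx)[None])[0, 1]
            if p > best_p:
                best, best_p = (cy, cx), p
    return best, best_p


if __name__ == "__main__":
    X, y = training_set()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    novel_img, true_pt = make_synthetic_example(np.random.default_rng(42))
    pred_pt, prob = predict_grasp_point(clf, novel_img)
    print(f"true grasp point {true_pt}, predicted {pred_pt} (p={prob:.2f})")
```

In this sketch the classifier never sees the test image during training, mirroring the paper's evaluation on objects absent from the training set; the real system uses richer image features and a robotic platform rather than toy synthetic bars.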

Cite

APA

Saxena, A., Driemeyer, J., Kearns, J., & Ng, A. Y. (2007). Robotic grasping of novel objects. In Advances in Neural Information Processing Systems (pp. 1209–1216).
