A vision-based system for grasping novel objects in cluttered environments

Abstract

We present our vision-based system for grasping novel objects in cluttered environments. Our system comprises four components: 1) deciding where to grasp an object, 2) perceiving obstacles, 3) planning an obstacle-free path, and 4) following the path to grasp the object. While most prior work assumes the availability of a detailed 3-D model of the environment, our system focuses on algorithms that are robust to the uncertainty and missing data typical of real-world experiments. In this paper, we test our robotic grasping system using our STAIR (STanford AI Robots) platforms on two experiments: grasping novel objects and unloading items from a dishwasher. We also illustrate these ideas in the context of having a robot fetch an object from another room in response to a verbal request. © 2010 Springer-Verlag Berlin Heidelberg.
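
To make the four-stage decomposition concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the paper's method or code: the highest-point grasp heuristic, the voxel obstacle map, the straight-line planner, and all function names are hypothetical stand-ins (the paper learns grasp points from images and would use a proper motion planner).

```python
"""A minimal, illustrative sketch of the four-stage pipeline named in the
abstract. Every helper here is a hypothetical stand-in, not the authors'
code: the paper learns grasp points from images, whereas this sketch uses
toy heuristics purely to show how the stages fit together."""

import numpy as np

VOXEL = 0.05  # obstacle-map resolution in meters (arbitrary choice)


def find_grasp_point(cloud: np.ndarray) -> np.ndarray:
    # Stage 1: decide where to grasp. Toy heuristic: pick the highest
    # point in the cloud. (The paper instead learns grasp points from
    # images and handles novel object shapes.)
    return cloud[np.argmax(cloud[:, 2])]


def perceive_obstacles(cloud: np.ndarray) -> set:
    # Stage 2: perceive obstacles. A sparse voxel set tolerates missing
    # data: unsensed regions simply contribute no voxels.
    return {tuple(np.floor(p / VOXEL).astype(int)) for p in cloud}


def plan_path(start: np.ndarray, goal: np.ndarray, occupied: set,
              steps: int = 60) -> list:
    # Stage 3: plan an obstacle-free path. Toy planner: straight-line
    # interpolation, rejected if any waypoint lands in an occupied voxel.
    # A real system would use a sampling-based planner (e.g. PRM/RRT).
    path = [start + t * (goal - start) for t in np.linspace(0.0, 1.0, steps)]
    for p in path:
        if tuple(np.floor(p / VOXEL).astype(int)) in occupied:
            raise RuntimeError("straight-line path blocked; replan needed")
    return path


def execute_path(path: list) -> None:
    # Stage 4: follow the path to grasp the object (controller stubbed out).
    for waypoint in path:
        pass  # e.g. stream each waypoint to the arm's joint controller


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.random((500, 3))                 # stand-in depth-sensor cloud
    grasp = find_grasp_point(cloud)              # 1) where to grasp
    # Carve the target object out of the obstacle map; otherwise the
    # goal pose itself would register as a collision.
    scene = cloud[np.linalg.norm(cloud - grasp, axis=1) > 0.1]
    obstacles = perceive_obstacles(scene)        # 2) perceive obstacles
    start = np.array([0.0, 0.0, 2.0])            # gripper poised above scene
    path = plan_path(start, grasp, obstacles)    # 3) obstacle-free path
    execute_path(path)                           # 4) execute the grasp
    print(f"grasped object at {np.round(grasp, 2)} via {len(path)} waypoints")
```

Note how the target object must be excluded from the obstacle map before planning, since the goal pose necessarily lies on the object itself; the paper's emphasis on robustness to noisy and incomplete sensing matters most at exactly this perception-to-planning boundary.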

Citation (APA)

Saxena, A., Wong, L., Quigley, M., & Ng, A. Y. (2010). A vision-based system for grasping novel objects in cluttered environments. In Springer Tracts in Advanced Robotics (Vol. 66, pp. 337–348). https://doi.org/10.1007/978-3-642-14743-2_28
