We present our vision-based system for grasping novel objects in cluttered environments. The system comprises four components: 1) deciding where to grasp an object, 2) perceiving obstacles, 3) planning an obstacle-free path, and 4) following the path to grasp the object. While most prior work assumes the availability of a detailed 3-d model of the environment, our system focuses on algorithms that are robust to uncertainty and missing data, conditions typical of real-world experiments. In this paper, we test our robotic grasping system on our STAIR (STanford AI Robots) platforms in two experiments: grasping novel objects and unloading items from a dishwasher. We also illustrate these ideas in the context of having a robot fetch an object from another room in response to a verbal request. © 2010 Springer-Verlag Berlin Heidelberg.
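The four components above form a sequential pipeline. The sketch below shows one way that structure could be organized; all class and function names are illustrative assumptions (the paper does not publish an API), and each stage body is a trivial placeholder standing in for the learned grasp-point detector, obstacle perception, and motion planner the authors describe.

```python
# Hypothetical sketch of the four-stage grasping pipeline: the stage
# interfaces are assumptions for illustration, not the authors' code.
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]


@dataclass
class Scene:
    images: list                # camera images of the cluttered scene
    obstacles: List[Point3D]    # perceived obstacle points (may be sparse/noisy)


def predict_grasp_point(scene: Scene) -> Point3D:
    """Stage 1: choose where to grasp the object (placeholder value)."""
    return (0.4, 0.0, 0.1)


def perceive_obstacles(scene: Scene) -> List[Point3D]:
    """Stage 2: build an obstacle set, tolerating missing depth data."""
    return scene.obstacles


def plan_path(start: Point3D, goal: Point3D,
              obstacles: List[Point3D]) -> List[Point3D]:
    """Stage 3: plan an obstacle-free path (straight-line placeholder)."""
    steps = 5
    return [tuple(s + (g - s) * t / steps for s, g in zip(start, goal))
            for t in range(steps + 1)]


def execute_path(path: List[Point3D]) -> Point3D:
    """Stage 4: follow the path; returns the final end-effector position."""
    return path[-1]


def grasp(scene: Scene, arm_start: Point3D) -> Point3D:
    """Run the full pipeline and return where the gripper ends up."""
    goal = predict_grasp_point(scene)
    obstacles = perceive_obstacles(scene)
    path = plan_path(arm_start, goal, obstacles)
    return execute_path(path)
```

The key design point the abstract emphasizes is that stages 1 and 2 must degrade gracefully when the 3-d data is incomplete, rather than assuming a full model of the scene.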
CITATION STYLE
Saxena, A., Wong, L., Quigley, M., & Ng, A. Y. (2010). A vision-based system for grasping novel objects in cluttered environments. In Springer Tracts in Advanced Robotics (Vol. 66, pp. 337–348). https://doi.org/10.1007/978-3-642-14743-2_28