Adapting preshaped grasping movements using vision descriptors


Abstract

Grasping is one of the most important abilities needed for future service robots. In the task of picking up an object amid clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment as well as long computation times, both of which are often unavailable. Therefore, methods are needed that execute grasps robustly even with imprecise information gathered only from standard stereo vision. We propose techniques that reactively modify the robot's learned motor primitives based on non-parametric potential fields centered on the Early Cognitive Vision descriptors. These allow both obstacle avoidance and the adaptation of finger motions to the object's local geometry. The methods were tested on a real robot, where they led to improved adaptability and quality of grasping actions. © 2010 Springer-Verlag.
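The abstract does not give the exact kernel form used in the paper, but the core idea of superimposing a non-parametric repulsive potential field (centered on vision descriptor positions) onto a learned motor primitive's velocity can be sketched as follows. All function names, the Gaussian kernel choice, and the parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def repulsive_velocity(x, descriptors, strength=1.0, width=0.05):
    """Sketch of a non-parametric repulsive field built from Gaussian
    kernels placed at vision-descriptor positions (hypothetical form).

    x           -- current end-effector position, shape (3,)
    descriptors -- 3D positions of vision descriptors, shape (N, 3)
    """
    diff = x - descriptors                        # vectors pointing away from each descriptor
    d2 = np.sum(diff ** 2, axis=1)                # squared distances to descriptors
    w = strength * np.exp(-d2 / (2 * width**2))   # kernel weight per descriptor
    # Negative gradient of the summed potential pushes the hand
    # away from nearby descriptors; distant ones contribute ~0.
    return np.sum((w / width**2)[:, None] * diff, axis=0)

def adapted_velocity(v_nominal, x, descriptors):
    """Reactively modify the motor primitive's nominal velocity by
    adding the repulsive term (the superposition described above)."""
    return v_nominal + repulsive_velocity(x, descriptors)
```

Because the repulsion decays smoothly with distance, the learned movement is left essentially unchanged far from clutter and only bends near descriptors, which matches the reactive, planner-free behavior the abstract describes.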

Citation (APA)

Krömer, O., Detry, R., Piater, J., & Peters, J. (2010). Adapting preshaped grasping movements using vision descriptors. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6226 LNAI, pp. 156–166). https://doi.org/10.1007/978-3-642-15193-4_15
