Learning grasping affordances from local visual descriptors

Abstract

In this paper we study the learning of affordances through self-experimentation, focusing on local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, the case of grasping. Although graspability is a property of the whole object, a grasp will only succeed if applied to the right part of the object. We propose an algorithm that learns local visual descriptors of good grasping points from a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) based on simple local features. Experimental results on a humanoid robot illustrate how our method learns descriptors of good grasping points and generalizes to novel objects based on prior experience. ©2009 IEEE.
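The abstract only summarizes the approach; the paper itself specifies the probabilistic model. As a rough, hypothetical illustration of the general idea (not the authors' implementation), the sketch below fits a logistic model mapping a local feature vector around a candidate grasp point to a grasp-success probability. The feature dimensionality, the trial data, and all parameter values are invented for illustration.

import numpy as np

# Hypothetical sketch: a logistic model mapping a local visual
# descriptor x (e.g. filter responses around a candidate grasp point)
# to the probability that a grasp attempted at that point succeeds.
# This is an illustration of the general idea, not the paper's method.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights w and bias b by gradient ascent on the
    Bernoulli log-likelihood of the observed grasp outcomes."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)       # predicted success probability
        w += lr * (X.T @ (y - p) / n)  # log-likelihood gradient wrt w
        b += lr * np.mean(y - p)       # log-likelihood gradient wrt b
    return w, b

# Fabricated trial data: 200 grasp attempts, each described by a
# 5-dimensional local descriptor and a binary success/failure outcome.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (sigmoid(X @ true_w) > rng.random(200)).astype(float)

w, b = fit_logistic(X, y)
print("predicted grasp-success probabilities:",
      np.round(sigmoid(X[:3] @ w + b), 2))

In this toy version, generalization to novel objects comes for free from the design choice the abstract describes: the model scores local appearance rather than whole-object identity, so any object exhibiting similar local features yields similar success estimates.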

Citation (APA)

Montesano, L., & Lopes, M. (2009). Learning grasping affordances from local visual descriptors. In 2009 IEEE 8th International Conference on Development and Learning, ICDL 2009. https://doi.org/10.1109/DEVLRN.2009.5175529
