Learning grasping affordances from local visual descriptors

  • Luis Montesano
  • Manuel Lopes

Readers: 71 (Mendeley users who have this article in their library) · Citations: 36

Abstract

In this paper we study the learning of affordances through self-experimentation. Specifically, we study the learning of local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, the case of grasping. Although graspability is a property of the whole object, a grasp action will only succeed if applied to the right part of the object. We propose an algorithm to learn local visual descriptors of good grasping points from a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) from simple local features. Experimental results on a humanoid robot illustrate how our method learns descriptors of good grasping points and generalizes to novel objects based on prior experience.
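The abstract's core idea, mapping simple local visual features of an image patch to a probability of grasp success learned from the robot's own trials, can be illustrated with a minimal sketch. The feature choices (mean intensity, variance, gradient energy) and the logistic-regression estimator below are stand-ins of our own, not the descriptors or the probabilistic model actually used in the paper:

```python
import numpy as np

def extract_local_features(patch):
    """Toy local descriptor for an image patch: mean intensity, variance,
    and gradient energy. A stand-in for the paper's local filter responses."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.var(), (gx**2 + gy**2).mean()])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_grasp_model(X, y, lr=1.0, epochs=3000):
    """Logistic regression on (feature, success) pairs collected from
    grasp trials; returns weights including a bias term."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # prepend bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)  # gradient of the log-loss
    return w

def grasp_success_probability(w, patch):
    """Predicted probability that a grasp applied at this patch succeeds."""
    f = extract_local_features(patch)
    return sigmoid(w @ np.concatenate(([1.0], f)))

# Simulated self-experimentation data: patches containing a strong edge
# (e.g. an object boundary or handle) yield successful grasps; flat,
# featureless patches yield failures.
rng = np.random.default_rng(0)
edge_patch = np.zeros((8, 8)); edge_patch[:, 4:] = 1.0
flat_patch = 0.5 + 0.01 * rng.standard_normal((8, 8))

trials = [edge_patch + 0.01 * rng.standard_normal((8, 8)) for _ in range(20)]
trials += [0.5 + 0.01 * rng.standard_normal((8, 8)) for _ in range(20)]
X = np.array([extract_local_features(p) for p in trials])
y = np.array([1.0] * 20 + [0.0] * 20)

w = fit_grasp_model(X, y)
p_edge = grasp_success_probability(w, edge_patch)
p_flat = grasp_success_probability(w, flat_patch)
```

After training on the simulated trials, the model assigns a higher success probability to the edge-bearing patch than to the flat one, mirroring the paper's point that graspability is predicted locally even though it is a property of the whole object.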

