Affordance Prediction via Learned Object Attributes

  • Hermans, T.
  • Rehg, J. M.
  • Bobick, A. F.

Abstract

We present a novel method for learning and predicting the affordances of an object based on its physical and visual attributes. Affordance prediction is a key task in autonomous robot learning, as it allows a robot to reason about the actions it can perform in order to accomplish its goals. Previous approaches to affordance prediction have either learned direct mappings from visual features to affordances, or have introduced object categories as an intermediate representation. In this paper, we argue that physical and visual attributes provide a more appropriate mid-level representation for affordance prediction, because they support information-sharing between affordances and objects, resulting in superior generalization performance. In particular, affordances are more likely to be correlated with the attributes of an object than with its visual appearance or a linguistically-derived object category. We provide preliminary experimental validation of our method, and present empirical comparisons to both the direct and category-based approaches to affordance prediction. Our encouraging results suggest the promise of the attribute-based approach to affordance prediction.
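The pipeline the abstract describes (visual features → mid-level attributes → affordances) can be sketched as two chained predictors. This is a minimal illustration only: the attribute and affordance names below are hypothetical, and the thresholds stand in for the learned classifiers used in the paper.

```python
# Sketch of attribute-mediated affordance prediction.
# Attribute/affordance names and thresholds are illustrative, not the paper's.

def predict_attributes(visual_features):
    """Stage 1: map raw visual features to mid-level physical/visual
    attributes. In the paper this stage is learned; simple thresholds
    stand in here."""
    return {
        "has_handle": visual_features["handle_score"] > 0.5,
        "is_concave": visual_features["concavity"] > 0.3,
        "is_small":   visual_features["size"] < 0.2,
    }

def predict_affordances(attributes):
    """Stage 2: map attributes to affordances. Because affordances
    correlate with attributes rather than with appearance or category,
    this mapping can be shared across objects."""
    return {
        "liftable":    attributes["has_handle"] or attributes["is_small"],
        "containable": attributes["is_concave"],
    }

def affordances_from_features(visual_features):
    """Full pipeline: features -> attributes -> affordances."""
    return predict_affordances(predict_attributes(visual_features))

# Example: a mug-like object with a handle and a concave body.
mug = {"handle_score": 0.9, "concavity": 0.8, "size": 0.15}
print(affordances_from_features(mug))  # both affordances predicted True
```

The key design point is that stage 2 never sees raw appearance, so a new object sharing attributes with known objects inherits their affordance predictions, which is the generalization benefit the abstract claims over direct feature-to-affordance mappings.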

Cite (APA)

Hermans, T., Rehg, J. M., & Bobick, A. F. (2011). Affordance Prediction via Learned Object Attributes. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 1–8). Retrieved from https://www.cs.utah.edu/~thermans/papers/hermans-icra-spme2011.pdf
