Visual grasp affordances from appearance-based cues


Abstract

In this paper, we investigate the prediction of visual grasp affordances from 2-D measurements. Appearance-based estimation of grasp affordances is desirable when 3-D scans are unreliable due to clutter or material properties. We develop a general framework for estimating grasp affordances from 2-D sources, including local texture-like measures as well as object-category measures that capture previously learned grasp strategies. Local approaches to estimating grasp positions have been shown to be effective in real-world scenarios, but are unable to impart object-level biases and can be prone to false positives. We describe how global cues can be used to compute continuous pose estimates and corresponding grasp point locations, using a max-margin optimization for category-level continuous pose regression. We provide a novel dataset to evaluate visual grasp affordance estimation; on this dataset we show that a fused method outperforms either local or global methods alone, and that continuous pose estimation improves over discrete output models.
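
The abstract describes fusing local (texture-like) grasp cues with global, category-level cues derived from an estimated object pose. The sketch below is a minimal, hypothetical illustration of that kind of fusion, not the authors' implementation: the local detector is stood in for by gradient magnitude, the global cue by a Gaussian prior around a pose-predicted grasp point, and the fusion weight alpha is an assumed parameter.

```python
# Hypothetical sketch: fusing a local grasp-point response map with a
# global, pose-derived prior. All functions and parameters here are
# illustrative placeholders, not the method from the paper.
import numpy as np


def local_grasp_score(image: np.ndarray) -> np.ndarray:
    """Stand-in for a local, texture-like grasp detector.

    Gradient magnitude is used as a proxy response; a real system would
    apply a learned classifier over local patches.
    """
    gy, gx = np.gradient(image.astype(float))
    response = np.hypot(gx, gy)
    return response / (response.max() + 1e-8)


def global_pose_prior(shape: tuple, grasp_point: tuple, sigma: float = 20.0) -> np.ndarray:
    """Stand-in for a category-level cue: a Gaussian bump centred on the
    grasp location predicted from the estimated object pose."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = grasp_point
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))


def fuse_affordances(image: np.ndarray, predicted_grasp: tuple, alpha: float = 0.5):
    """Combine local evidence with the global prior and return the fused
    score map together with the highest-scoring grasp location."""
    local = local_grasp_score(image)
    prior = global_pose_prior(image.shape, predicted_grasp)
    fused = alpha * local + (1.0 - alpha) * prior
    best = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, best


if __name__ == "__main__":
    img = np.random.rand(240, 320)                      # placeholder image
    fused_map, grasp = fuse_affordances(img, predicted_grasp=(120, 160))
    print("fused grasp estimate:", grasp)
```

In this toy setup the global prior suppresses spurious local responses far from the pose-predicted grasp region, which mirrors the paper's motivation for combining the two cue types.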

Citation (APA)

Song, H. O., Fritz, M., Gu, C., & Darrell, T. (2011). Visual grasp affordances from appearance-based cues. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops) (pp. 998–1005). https://doi.org/10.1109/ICCVW.2011.6130360
