Learning human priors for task-constrained grasping


Abstract

An autonomous agent using man-made objects must understand how the task conditions grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in feature space. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the features relevant to it. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching against learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.
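To make the matching step concrete, the following is a minimal sketch and not the authors' implementation: candidate grasps on a novel object are mapped into the learned feature space and ranked by their distance to the nearest human-demonstrated grasp for the target task. The `embed` function and all data below are illustrative placeholders for the learned task-aware representation and real grasp descriptors.

```python
# Illustrative sketch (not the paper's code): ranking synthesized grasp
# candidates for a task by similarity to human-demonstrated grasps in a
# learned feature space.

import numpy as np


def embed(grasp_features: np.ndarray) -> np.ndarray:
    """Placeholder for the learned mapping from raw grasp/object features
    (e.g. local shape, grasp pose relative to the object) to the
    task-aware feature space. Identity here, purely for illustration."""
    return grasp_features


def rank_candidates(candidates: np.ndarray,
                    demo_examples: np.ndarray) -> np.ndarray:
    """Order candidate grasps by distance to the nearest demonstrated
    grasp for the task (smaller distance = better task match)."""
    cand_emb = embed(candidates)       # shape (n_candidates, d)
    demo_emb = embed(demo_examples)    # shape (n_demos, d)
    # Pairwise Euclidean distances between candidates and demonstrations.
    dists = np.linalg.norm(cand_emb[:, None, :] - demo_emb[None, :, :], axis=-1)
    nearest = dists.min(axis=1)        # distance to the closest demonstration
    return np.argsort(nearest)         # candidate indices, best match first


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demos = rng.normal(size=(20, 8))   # demonstrated grasps for one task
    cands = rng.normal(size=(50, 8))   # synthesized grasps on a new object
    order = rank_candidates(cands, demos)
    print("Top-5 candidate grasp indices:", order[:5])
```

In this scheme the ranking depends only on distances in the learned space, which is what lets the approach avoid an analytically formulated task metric.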

Cite

APA

Hjelm, M., Ek, C. H., Detry, R., & Kragic, D. (2015). Learning human priors for task-constrained grasping. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9163, pp. 207–217). Springer Verlag. https://doi.org/10.1007/978-3-319-20904-3_20
