Vision-based reacquisition for task-level control

Abstract

We describe a vision-based algorithm that enables a robot to “reacquire” objects previously indicated by a human user through simple image-based stylus gestures. By automatically generating a multiple-view appearance model for each object, the method can reacquire the object and reconstitute the user’s segmentation hints even after the robot has moved long distances or significant time has elapsed since the gesture. We demonstrate that this capability enables novel command and control mechanisms: after a human gives the robot a “guided tour” of named objects and their locations in the environment, the user can dispatch the robot to fetch any particular object simply by stating its name. We implement the object reacquisition algorithm on an outdoor mobile manipulation platform and evaluate its performance under challenging conditions that include lighting and viewpoint variation, clutter, and object relocation.
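The abstract does not detail the implementation, but the core idea of a multiple-view appearance model lends itself to a short sketch. The Python example below is a minimal illustration assuming OpenCV ORB features and brute-force descriptor matching; the names AppearanceModel, add_view, match_score, and reacquire are hypothetical and not taken from the paper. It shows how an object indicated by a gesture region might be stored as descriptors from several views and later scored against a new camera image:

```python
# Illustrative sketch only: the paper's actual method is not published here.
import cv2


class AppearanceModel:
    """Multi-view appearance model for one user-named object (hypothetical)."""

    def __init__(self, name):
        self.name = name
        self.views = []                        # ORB descriptors, one entry per view
        self.orb = cv2.ORB_create(nfeatures=500)

    def add_view(self, image, roi):
        """Store features from the user's gesture region roi = (x, y, w, h).

        `image` is a grayscale uint8 NumPy array from the robot's camera.
        """
        x, y, w, h = roi
        patch = image[y:y + h, x:x + w]
        _, descriptors = self.orb.detectAndCompute(patch, None)
        if descriptors is not None:
            self.views.append(descriptors)

    def match_score(self, image):
        """Best count of ratio-test matches between `image` and any stored view."""
        _, descriptors = self.orb.detectAndCompute(image, None)
        if descriptors is None or not self.views:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)  # crossCheck off for knnMatch
        best = 0
        for view_desc in self.views:
            matches = matcher.knnMatch(view_desc, descriptors, k=2)
            # Lowe's ratio test keeps only distinctive correspondences.
            good = [m for m in matches
                    if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
            best = max(best, len(good))
        return best


def reacquire(image, models, threshold=25):
    """Return the name of the model best supported by `image`, or None."""
    if not models:
        return None
    scores = {m.name: m.match_score(image) for m in models}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return name if score >= threshold else None
```

In the actual system, the appearance model would also need to handle viewpoint and scale variation so that the user's segmentation gesture can be reprojected onto the reacquired object; the fixed match threshold here is a stand-in for that machinery.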

Citation (APA)

Walter, M. R., Friedman, Y., Antone, M., & Teller, S. (2014). Vision-based reacquisition for task-level control. In Springer Tracts in Advanced Robotics (Vol. 79, pp. 493–507). Springer Verlag. https://doi.org/10.1007/978-3-642-28572-1_34
