Computer vision body modeling for gesture based teleoperation


Abstract

Dependable robots and teleoperation, taken in the broadest sense, require natural and friendly human-robot interaction systems. This work presents a methodology for human-robot interaction based on the perception of human intention from vision and force. The vision system interprets human gestures by integrating a stereovision system and a carving system, from which it extracts a model of the human body when a person approaches the robot. Interaction can also take place through contact, by perceiving the forces applied to the robot either through a force sensor on the wrist or through a sensing skin. Perceiving human intention enables intuitive interaction to modify the robot trajectory online when required. © 2007 Springer-Verlag Berlin Heidelberg.
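The online trajectory modification from sensed contact forces described in the abstract can be illustrated with a minimal admittance-style sketch. The function names, stiffness, and deadband values below are assumptions for illustration only, not the authors' actual implementation:

```python
import numpy as np

def admittance_offset(force, stiffness=200.0, deadband=2.0):
    """Map a sensed Cartesian force (N) to a trajectory offset (m).

    Forces below the deadband are ignored so sensor noise does not
    move the robot; beyond it, displacement is proportional to force
    (hypothetical stiffness of 200 N/m for illustration).
    """
    f = np.asarray(force, dtype=float)
    mag = np.linalg.norm(f)
    if mag < deadband:
        return np.zeros(3)
    # Compliant response along the direction of the applied force.
    return (mag - deadband) / stiffness * (f / mag)

def modify_trajectory(waypoints, force):
    """Shift the remaining waypoints by the admittance offset."""
    offset = admittance_offset(force)
    return [np.asarray(w, dtype=float) + offset for w in waypoints]
```

For example, a 4 N push along z (with the assumed 2 N deadband and 200 N/m stiffness) shifts every remaining waypoint by 1 cm in z, while a 1 N disturbance leaves the trajectory unchanged.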

Citation (APA)

Frigola, M., Rodriguez, A., Amat, J., & Casals, A. (2007). Computer vision body modeling for gesture based teleoperation. Springer Tracts in Advanced Robotics, 31, 121–137. https://doi.org/10.1007/978-3-540-71364-7_9
