Upper body gesture recognition for human-robot interaction

Abstract

This paper proposes a vision-based human-robot interaction system for a mobile robot platform. The robot first searches for a person who wants to interact with it; once it finds a subject, it stops in front of the person and interprets his or her upper-body gestures. Each gesture is represented as a sequence of body poses, and the robot recognizes four upper-body gestures: "Idle", "I love you", "Hello left", and "Hello right". A key pose-based particle filter determines the pose sequence, where the key poses are sparsely sampled from the pose space. A Pictorial Structure-based upper-body model represents the key poses, which are used to build an efficient proposal distribution for the particle filter: particles are drawn from this key pose-based proposal distribution for effective prediction of the upper-body pose. The Viterbi algorithm then estimates the gesture probabilities with a hidden Markov model. Experimental results show the robustness of our upper-body tracking and gesture recognition system. © 2011 Springer-Verlag.
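The abstract names the Viterbi decoding step only in passing. As an illustrative sketch (not the authors' implementation, and with hypothetical placeholder parameters), the Python snippet below shows how Viterbi decoding over a hidden Markov model could turn a sequence of recognized key poses into a gesture score. Under this reading, one HMM would be trained per gesture, and an observed pose sequence is assigned to the gesture whose model yields the highest Viterbi log-probability.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for an observation sequence.

    obs : sequence of observation indices (e.g., recognized key poses)
    pi  : (S,) initial state distribution
    A   : (S, S) transition matrix, A[i, j] = P(state j | state i)
    B   : (S, O) emission matrix, B[s, o] = P(observation o | state s)
    Returns the best state path and its log-probability.
    """
    S, T = len(pi), len(obs)
    logA, logB = np.log(A), np.log(B)
    delta = np.full((T, S), -np.inf)   # best log-prob of a path ending in s at t
    psi = np.zeros((T, S), dtype=int)  # backpointers to the best predecessor
    delta[0] = np.log(pi) + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA        # scores[i, j]: come from i to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + logB[:, obs[t]]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, float(delta[-1, path[-1]])

if __name__ == "__main__":
    # Toy 3-state HMM over 4 key-pose symbols; all numbers are placeholders,
    # not the paper's trained parameters.
    pi = np.array([0.8, 0.1, 0.1])
    A = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.2, 0.1, 0.7]])
    B = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.1, 0.6, 0.2, 0.1],
                  [0.1, 0.1, 0.2, 0.6]])
    key_pose_sequence = [0, 1, 1, 2, 3]
    path, logp = viterbi(key_pose_sequence, pi, A, B)
    print(path, logp)
```

To classify a gesture, one would run this decoder once per gesture model ("Idle", "I love you", "Hello left", "Hello right") and pick the gesture with the highest log-probability.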

Citation (APA)

Oh, C. M., Islam, M. Z., Lee, J. S., Lee, C. W., & Kweon, I. S. (2011). Upper body gesture recognition for human-robot interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6762 LNCS, pp. 294–303). https://doi.org/10.1007/978-3-642-21605-3_33
