Gesture recognition based on context awareness for human-robot interaction


Abstract

In this paper, we describe a vision-based algorithm that enables natural communication between humans and robots for Human-Robot Interaction. We propose a state transition model that uses attentive features for gesture recognition. The method defines the recognition procedure as five states: NULL, OBJECT, POSE, Local Gesture, and Global Gesture. We first infer the situation of the system by estimating transitions in the state model, and then apply a different recognition algorithm according to the current state for robust recognition. We also propose the Active Plane Model (APM), which represents the 3D and 2D information of a gesture simultaneously. The method constructs a gesture space by analyzing the statistical information of training images with PCA, and the symbolized images are recognized with an HMM as one of the model gestures. The proposed algorithm can therefore be used efficiently in real-world applications such as controlling intelligent home appliances and humanoid robots. Keywords: Gesture Recognition, Context Awareness, PCA, HMM. © 2006 Springer-Verlag Berlin Heidelberg.
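The five-state recognition procedure in the abstract can be sketched as a simple state machine. This is only an illustrative reconstruction: the feature names (`object_detected`, `pose_detected`, etc.) and the transition rules are assumptions, not the paper's exact conditions, which depend on its attentive features.

```python
from enum import Enum, auto

class GestureState(Enum):
    """The five recognition states named in the abstract."""
    NULL = auto()
    OBJECT = auto()
    POSE = auto()
    LOCAL_GESTURE = auto()
    GLOBAL_GESTURE = auto()

def next_state(object_detected: bool, pose_detected: bool,
               local_motion: bool, global_motion: bool) -> GestureState:
    """Infer the system state from per-frame attentive features.

    The ordering below (object -> pose -> motion) is a plausible
    hierarchy, not the paper's published transition conditions.
    """
    if not object_detected:
        return GestureState.NULL            # nothing of interest in view
    if not pose_detected:
        return GestureState.OBJECT          # a person/object found, no pose yet
    if global_motion:
        return GestureState.GLOBAL_GESTURE  # whole-body movement
    if local_motion:
        return GestureState.LOCAL_GESTURE   # e.g. hand-only movement
    return GestureState.POSE                # static pose held

# A state-specific recognizer (PCA projection + HMM decoding per the
# abstract) would then be dispatched on the returned state.
```

In the paper's pipeline, each state would trigger a different recognizer; for the gesture states, image sequences are projected into a PCA gesture space and the resulting symbol sequence is scored against per-gesture HMMs.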

APA

Hong, S. J., Setiawan, N. A., Kim, S. G., & Lee, C. W. (2006). Gesture recognition based on context awareness for human-robot interaction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4282 LNCS, pp. 1–10). https://doi.org/10.1007/11941354_1
