Real-time hand gesture recognition is extremely challenging because of the many degrees of freedom (DOFs) of hand pose and motion. Nevertheless, for natural human-robot interaction, gestures can provide a powerful interface for commanding a robot to perform a specific task. This paper presents a vision-based real-time gesture recognition system that segments the three largest skin-color components of an image and applies template-matching techniques with multiple features. Gesture commands are generated whenever the combination of the three skin-like regions in an image matches a predefined gesture. These commands are sent to robots through a knowledge-based software platform for human-robot interaction. The effectiveness of the method has been demonstrated by interacting with an entertainment robot named AIBO. © Springer-Verlag 2004.
CITATION STYLE
Hasanuzzaman, M., Zhang, T., Ampornaramveth, V., Bhuiyan, M. A., Shirai, Y., & Ueno, H. (2004). Gesture recognition for human-robot interaction through a knowledge based software platform. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3211, 530–537. https://doi.org/10.1007/978-3-540-30125-7_66