Gesture recognition using a marionette model and Dynamic Bayesian Networks (DBNs)

18 citations · 16 Mendeley readers

Abstract

This paper presents a framework for gesture recognition that models the system as a Dynamic Bayesian Network (DBN) from a marionette point of view. Incorporating human qualities such as anticipation and empathy into the perception system of a social robot remains an open issue. Our goal is to explore possible implementations and test their feasibility. Towards this end we started developing the guide robot 'Nicole', equipped with a monocular camera and an inertial sensor to observe its environment. The interaction context is a person performing gestures, to which 'Nicole' reacts by means of audio output and motion. In this paper we present a solution to the gesture recognition task based on a DBN. We show that a DBN offers a human-like way of recognizing gestures, capturing the quality of anticipation through the concepts of prediction and update. A novel aspect of our approach is the incorporation of a marionette model into the DBN as a trade-off between simple constant-acceleration models and complex articulated models. © Springer-Verlag Berlin Heidelberg 2006.
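The "anticipation through prediction and update" idea in the abstract is the standard Bayesian filtering recursion that a DBN performs at each time step. The sketch below is purely illustrative, assuming a toy discrete state space: the gesture states, transition matrix, and observation likelihoods are hypothetical placeholders, not the marionette model or parameters from the paper.

```python
import numpy as np

# Hypothetical gesture states and models (not from the paper).
states = ["wave", "point", "idle"]
T = np.array([[0.8, 0.1, 0.1],     # transition model P(x_t | x_{t-1})
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
L = np.array([[0.7, 0.2, 0.1],     # observation likelihoods P(z_t | x_t)
              [0.1, 0.6, 0.3]])    # for two coarse image features z in {0, 1}

def predict(belief):
    """Propagate the belief through the motion model (the 'anticipation' step)."""
    return belief @ T

def update(belief, z):
    """Correct the predicted belief with the new observation z."""
    b = belief * L[z]
    return b / b.sum()

belief = np.ones(3) / 3            # uniform prior over gestures
for z in [0, 0, 0]:                # three observations of feature 0
    belief = update(predict(belief), z)

print(states[int(np.argmax(belief))])  # most likely gesture so far
```

Each loop iteration first anticipates the next state from the motion model, then corrects that guess with the observation; the paper's contribution is replacing the simple transition model with one derived from marionette kinematics.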

Citation (APA)

Rett, J., & Dias, J. (2006). Gesture recognition using a marionette model and Dynamic Bayesian Networks (DBNs). In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4142 LNCS, pp. 69–80). Springer Verlag. https://doi.org/10.1007/11867661_7
