This paper describes work in progress on a plug-in automatic gesture recogniser intended as a module for a natural speech-and-gesture human-computer dialogue interface. A model-based approach to gesture recognition is proposed: gesture models are built from models of the trajectories that selected fingertips traverse through the gesturer's physical 3D space as the different gestures of interest are performed. In the initial version, gestures are captured with a data glove; later versions are intended to use computer vision. The paper outlines the gesture model design at a general level, argues for its choice, and lays out the rationale behind the work as a whole. As the recogniser is not yet fully implemented, no test results can be presented.
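The trajectory-based recognition idea sketched in the abstract — matching a captured 3D fingertip trajectory against stored per-gesture trajectory models — could be illustrated, under assumptions of our own, with a simple nearest-template classifier using dynamic time warping (DTW). This is not the paper's actual algorithm; the function names and the template data below are hypothetical.

```python
# Illustrative sketch (not the paper's method): classify a captured 3D
# fingertip trajectory by its DTW distance to stored template trajectories.

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of 3D points."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated distance aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two 3D samples
            d = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
            cost[i][j] = d + min(cost[i - 1][j],      # skip a sample of a
                                 cost[i][j - 1],      # skip a sample of b
                                 cost[i - 1][j - 1])  # match the samples
    return cost[n][m]

def classify(trajectory, templates):
    """Return the label of the template trajectory nearest to the input."""
    return min(templates, key=lambda label: dtw_distance(trajectory, templates[label]))

# Hypothetical fingertip trajectories: lists of (x, y, z) samples,
# e.g. as delivered by a data glove at a fixed sampling rate.
templates = {
    "point": [(0, 0, 0), (1, 0, 0), (2, 0, 0)],
    "lift":  [(0, 0, 0), (0, 0, 1), (0, 0, 2)],
}
captured = [(0, 0, 0), (0.9, 0.1, 0), (2.1, 0, 0)]
print(classify(captured, templates))  # prints "point"
```

DTW is chosen here only because it tolerates the speed variations typical of human gesturing; the paper's own model design is described at a general level and may differ.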
Citation: Munk, K. H. (2002). Development of a gesture plug-in for natural dialogue interfaces. In Lecture Notes in Computer Science (Vol. 2298, pp. 47–58). Springer. https://doi.org/10.1007/3-540-47873-6_5