Head gesture recognition using feature interpolation

Abstract

This paper addresses a technique for recognizing head gestures. The proposed system is composed of eye tracking and head motion decision. The eye tracking step is divided into face detection, eye location, and eye feature interpolation. Face detection obtains the face region using an integrated feature space, with multiple Bayesian classifiers employed to select face candidate windows within that space. Eye location extracts the positions of the eyes from the detected face region; for real-time tracking, it is performed only in the region close to the previously found pair of eyes. If a pair of eyes is not located, the system estimates the feature vector using a mean velocity measure (MVM). After eye tracking, the coordinates of the detected eyes are transformed into normalized x- and y-coordinate vectors. Head gestures are then recognized by HMMs adapted to a directional vector, which represents the direction of head movement; the HMMs can distinguish neutral as well as positive and negative gestures. Experimental results are reported: the techniques were applied to a large set of images with notable success. © Springer-Verlag Berlin Heidelberg 2006.
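
The pipeline the abstract describes can be illustrated with a short sketch. The Python code below is a minimal illustration, not the authors' implementation: the function names, the 5-frame window, the 8-bin angular quantization, and the HMM parameterization are all assumptions filled in for the example, since the abstract does not specify them. It covers three steps named above: MVM-style extrapolation when the eyes are lost, quantizing head motion into directional symbols, and scoring symbol sequences against per-gesture HMMs.

```python
import numpy as np

def estimate_missing_eyes(history, window=5):
    """When the eye pair is not detected in the current frame, extrapolate
    its position from the mean velocity of recent tracked positions
    (a stand-in for the paper's mean velocity measure, MVM)."""
    recent = np.asarray(history[-(window + 1):], dtype=float)  # (k+1, 2) eye midpoints
    mean_velocity = np.diff(recent, axis=0).mean(axis=0)       # average per-frame motion
    return recent[-1] + mean_velocity                          # one-frame extrapolation

def direction_symbol(prev, curr, n_bins=8, eps=1.0):
    """Quantize the inter-frame displacement of the eye midpoint into a
    discrete direction symbol: 0 for negligible motion, 1..n_bins for
    angular sectors. These symbols serve as the HMM observations."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    if dx * dx + dy * dy < eps * eps:
        return 0                                               # no significant movement
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    return 1 + int(angle / (2 * np.pi / n_bins))

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM with start probs pi (N,),
    transitions A (N, N), and emissions B (N, M); returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    log_p = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()
        log_p += np.log(scale)
        alpha = alpha / scale
    return log_p

def classify_gesture(obs, models):
    """Pick the gesture (e.g. 'positive' nod, 'negative' shake, 'neutral')
    whose HMM assigns the observation sequence the highest likelihood.
    `models` maps gesture name -> (pi, A, B)."""
    return max(models, key=lambda g: hmm_log_likelihood(obs, *models[g]))
```

In use, one HMM per gesture would be trained offline (e.g. with Baum-Welch) on direction-symbol sequences; at run time, a window of recent symbols is scored against each model and the highest-likelihood gesture is reported.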

Cite

APA

Kang, Y. G., & Rhee, P. K. (2006). Head gesture recognition using feature interpolation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4251 LNAI-I, pp. 582–589). Springer Verlag. https://doi.org/10.1007/11892960_70
