Vision based acquisition of mouth actions for human-computer interaction

Abstract

We describe a computer-vision-based system that allows movements of the mouth to be used for human-computer interaction (HCI). The lower region of the face is tracked by locating and following the position of the nostrils. The nostril location determines a sub-region of the image from which the cavity of the open mouth may be segmented. Shape features of the open mouth can then be used for continuous, real-time data input. Several applications of the head-tracking mouth controller are described. © Springer-Verlag Berlin Heidelberg 2004.
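The pipeline the abstract describes — track the nostrils, define a search window below them, segment the dark open-mouth cavity, and extract shape features — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the window sizes, the intensity threshold, and the function name are all hypothetical.

```python
# Hypothetical sketch of the nostril-anchored mouth segmentation described
# in the abstract. A grayscale frame is modeled as a 2D list of intensities
# (0-255). Given a tracked nostril position, we search a window below it,
# threshold dark pixels as the open-mouth cavity, and return simple shape
# features (area and bounding-box extents) for continuous input.

def mouth_shape_features(image, nostril_row, nostril_col,
                         window_height=4, window_halfwidth=3, threshold=60):
    """Return (area, width, height) of the dark cavity region below the
    nostrils, or None if the mouth appears closed (no dark pixels found)."""
    rows = range(nostril_row + 1,
                 min(len(image), nostril_row + 1 + window_height))
    cols = range(max(0, nostril_col - window_halfwidth),
                 min(len(image[0]), nostril_col + window_halfwidth + 1))
    # The open-mouth cavity is dark: collect pixels below the threshold.
    cavity = [(r, c) for r in rows for c in cols if image[r][c] < threshold]
    if not cavity:
        return None
    rs = [r for r, _ in cavity]
    cs = [c for _, c in cavity]
    area = len(cavity)                  # cavity size
    width = max(cs) - min(cs) + 1       # horizontal extent (mouth width)
    height = max(rs) - min(rs) + 1      # vertical extent (mouth openness)
    return area, width, height
```

The returned width and height give continuous signals (e.g. mouth openness) that a controller application could map to input parameters; a real system would first track the nostrils frame to frame to keep the search window anchored under head motion.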

Citation (APA)

De Silva, G. C., Lyons, M. J., & Tetsutani, N. (2004). Vision based acquisition of mouth actions for human-computer interaction. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3157, pp. 959–960). Springer Verlag. https://doi.org/10.1007/978-3-540-28633-2_113
