Template-based eye and mouth detection for 3D video conferencing

13 citations · 10 readers (Mendeley)

Abstract

The use of 3D face animation techniques in video conferencing applications enables new features such as viewpoint adaptation, stereo display, and virtual conferencing in shared synthetic rooms. Most of these systems require automatic detection of facial feature points for tracking or initialization. We have developed an automatic method for facial feature detection using synthetic deformable templates. The algorithm requires no training procedure or parameter set and can be applied to images with different face-area sizes. Iris-pupil centers, mouth corners, and the inner lip line of the mouth are found robustly and with high accuracy from a single still image. This automatic process makes it possible to set up an advanced video conference system that uses 3D head models of the participants to synthesize new views. © Springer-Verlag Berlin Heidelberg 2003.
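The core idea of template-based feature detection can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it is a simplified, self-contained example (NumPy only) that builds a synthetic dark-disc template, crudely modeling an iris/pupil on a bright sclera, and locates it in an image by exhaustive sum-of-squared-differences matching. The function names and the SSD criterion are illustrative assumptions; the paper uses synthetic deformable templates, which additionally vary template shape parameters during the search.

```python
import numpy as np

def make_iris_template(size, radius):
    # Synthetic template: dark disc (iris/pupil) on a bright background.
    yy, xx = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    disc = (yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2
    tpl = np.ones((size, size))
    tpl[disc] = 0.0
    return tpl

def match_template_ssd(image, template):
    # Exhaustive search: slide the template over the image and keep the
    # position with the smallest sum of squared differences (SSD).
    th, tw = template.shape
    h, w = image.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            patch = image[y:y + th, x:x + tw]
            ssd = np.sum((patch - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos  # top-left corner of the best match

# Toy "face" image: bright background with one dark iris pasted in.
img = np.ones((60, 80))
tpl = make_iris_template(15, 5)
img[20:35, 30:45] = tpl  # ground-truth top-left corner at (20, 30)

y, x = match_template_ssd(img, tpl)
print((y + 7, x + 7))  # iris-pupil center; template center offset is 7
```

A deformable-template method replaces the single fixed template with a parameterized family (e.g., disc radius, lip-line curvature) and minimizes the matching cost over both position and shape parameters, which is what allows the approach to work across different face-area sizes without training.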

Citation (APA)

Rurainsky, J., & Eisert, P. (2003). Template-based eye and mouth detection for 3D video conferencing. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2849, 23–31. https://doi.org/10.1007/978-3-540-39798-4_6
