Joint modeling of facial expression and shape from video


Abstract

In this paper, we present a novel model for representing facial feature point tracks during a facial expression. The model is composed of a static shape part and a time-dependent expression part. We learn the model by tracking the points of interest in video recordings of trained actors making different facial expressions. Our results indicate that the proposed sum of two linear models - a person-dependent shape model and a person-independent expression model - approximates the true feature point motion well. © Springer-Verlag Berlin Heidelberg 2005.
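The additive decomposition described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes synthetic 2-D feature point tracks, takes the per-actor temporal mean as the person-dependent shape part, and extracts a shared person-independent expression basis from the pooled residuals via PCA (SVD). All array names, dimensions, and the PCA-based fitting are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of the paper's model: a frame of tracked points x(t)
# is approximated as  x(t) ≈ s + E^T a(t), where s is a static,
# person-dependent shape and E a person-independent expression basis.
# Data here is synthetic; the real model is learned from actor videos.

rng = np.random.default_rng(0)

n_points = 20   # tracked facial feature points (2-D -> 40 coordinates)
n_frames = 50   # frames per recorded sequence
n_actors = 3    # sequences from different actors

# Stand-in for tracked data: one (n_frames, 2*n_points) array per actor.
sequences = [rng.normal(size=(n_frames, 2 * n_points)) for _ in range(n_actors)]

# Person-dependent shape part: per-actor temporal mean of the tracks.
shapes = [seq.mean(axis=0) for seq in sequences]

# Person-independent expression part: pool the shape-removed residuals
# from all actors and extract a shared linear basis with an SVD.
residuals = np.vstack([seq - s for seq, s in zip(sequences, shapes)])
_, _, Vt = np.linalg.svd(residuals, full_matrices=False)
n_basis = 5
E = Vt[:n_basis]                 # shared expression basis (n_basis, 2*n_points)

# Reconstruct one actor's sequence as shape + expression projection.
seq0, s0 = sequences[0], shapes[0]
coeffs = (seq0 - s0) @ E.T       # time-dependent expression coefficients a(t)
approx = s0 + coeffs @ E

err = np.linalg.norm(seq0 - approx) / np.linalg.norm(seq0)
print(f"relative reconstruction error: {err:.3f}")
```

With real tracked data the residual error measures how well the sum of the two linear parts captures the true feature point motion, which is the quantity the paper evaluates.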

Citation (APA)
Tamminen, T., Kätsyri, J., Frydrych, M., & Lampinen, J. (2005). Joint modeling of facial expression and shape from video. In Lecture Notes in Computer Science (Vol. 3540, pp. 151–160). Springer Verlag. https://doi.org/10.1007/11499145_17
