In this paper, we present a novel model for representing facial feature point tracks during a facial expression. The model is composed of a static shape part and a time-dependent expression part. We learn the model by tracking the points of interest in video recordings of trained actors making different facial expressions. Our results indicate that the proposed sum of two linear models (a person-dependent shape model and a person-independent expression model) approximates the true feature point motion well. © Springer-Verlag Berlin Heidelberg 2005.
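The additive structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the point count, basis size, and all names (`shape`, `expr_basis`, `model_points`) are assumptions chosen for the example, and the data is random.

```python
import numpy as np

N_POINTS = 20  # number of tracked facial feature points (illustrative)
N_BASIS = 5    # number of expression basis vectors (illustrative)

rng = np.random.default_rng(0)

# Person-dependent static shape: one (x, y) position per feature point.
shape = rng.normal(size=(N_POINTS, 2))

# Person-independent linear expression basis, shared across subjects.
expr_basis = rng.normal(size=(N_BASIS, N_POINTS, 2))

def model_points(shape, expr_basis, coeffs):
    """Feature point positions as static shape plus a linear expression term."""
    expression = np.tensordot(coeffs, expr_basis, axes=1)  # -> (N_POINTS, 2)
    return shape + expression

# Time-varying coefficients drive the expression over a short track.
track = np.stack([
    model_points(shape, expr_basis, c)
    for c in rng.normal(size=(10, N_BASIS))
])
print(track.shape)  # (10, 20, 2): frames x points x coordinates
```

The key property mirrored here is that only the low-dimensional coefficient vector changes over time, while the shape term stays fixed per person and the expression basis is shared across people.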
CITATION STYLE
Tamminen, T., Kätsyri, J., Frydrych, M., & Lampinen, J. (2005). Joint modeling of facial expression and shape from video. In Lecture Notes in Computer Science (Vol. 3540, pp. 151–160). Springer Verlag. https://doi.org/10.1007/11499145_17