Model the dynamic evolution of facial expression from image sequences


Abstract

Facial Expression Recognition (FER) has been a challenging task for decades. In this paper, we model the dynamic evolution of facial expression by extracting both temporal appearance features and temporal geometry features from image sequences. To capture this pairwise feature evolution, our approach consists of two models. The first combines convolutional layers with temporal recursion to extract dynamic appearance features from raw images. The second focuses on geometric variations based on facial landmarks, for which we also propose a novel 2-distance representation and a resampling technique. The two models are combined by a weighted method to boost recognition performance. We test our approach on three widely used databases: CK+, Oulu-CASIA and MMI. The experimental results show that we achieve state-of-the-art accuracy. Moreover, our models require few setup parameters and accept variable-length frame sequences as input, which makes them flexible in practical applications.
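The weighted combination of the two models described above can be sketched as a late-fusion step over per-class probabilities. The fusion weight `alpha`, the class count, and the example scores below are illustrative assumptions, not values from the paper:

```python
# Hedged sketch of late fusion for two FER models: an appearance model
# (CNN + temporal recursion) and a geometry model (facial landmarks).
# alpha and the example probabilities are hypothetical, not from the paper.

def weighted_fusion(appearance_probs, geometry_probs, alpha=0.6):
    """Weighted sum of per-class probabilities from the two models."""
    assert len(appearance_probs) == len(geometry_probs)
    return [alpha * a + (1 - alpha) * g
            for a, g in zip(appearance_probs, geometry_probs)]

# Example with 3 hypothetical expression classes:
appearance = [0.2, 0.5, 0.3]   # appearance-model softmax output
geometry   = [0.1, 0.2, 0.7]   # geometry-model softmax output
fused = weighted_fusion(appearance, geometry, alpha=0.6)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Because both inputs are probability distributions and the weights sum to 1, the fused scores remain a valid distribution; `alpha` would typically be chosen on a validation set.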

CITATION STYLE

APA

Huan, Z., & Shang, L. (2018). Model the dynamic evolution of facial expression from image sequences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10938 LNAI, pp. 546–557). Springer Verlag. https://doi.org/10.1007/978-3-319-93037-4_43
