Deep facial emotion recognition in video using eigenframes

20 Citations — citations of this article
22 Readers — Mendeley users who have this article in their library

Abstract

Recently, video-based facial emotion recognition (FER) has become an attractive topic in the computer vision community. However, processing several hundred frames for a single video of a particular emotion is not efficient. In this study, the authors propose a novel approach to obtain a representative set of frames for a video in the eigenspace domain. Principal component analysis (PCA) is applied to a single emotional video, extracting the most significant eigenframes representing the temporal motion variance embedded in the video. Given that faces are segmented and normalised, the variance captured by PCA is attributed to the facial expression dynamics. The variation in the temporal domain is thus mapped to the eigenspace, reducing redundancy. The proposed approach is used to extract the input eigenframes. Then, VGG-16, ResNet50, and 2D and 3D CNN architectures called eigenFaceNet are trained on the RML, eNTERFACE'05, and AFEW 6.0 databases. The experimental results are superior to the state-of-the-art by 8% and 4% for the RML and eNTERFACE'05 databases, respectively. The performance gain is also coupled with a reduction in computational time.
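The core idea described above can be illustrated with a short sketch: treating each (segmented, normalised) face frame as a sample, PCA over the frames yields principal axes in pixel space whose reshaped forms act as eigenframes capturing the dominant modes of expression variance. This is an illustrative NumPy sketch, not the authors' implementation; the function name, the choice of k, and the SVD-based PCA are assumptions for demonstration.

```python
import numpy as np

def extract_eigenframes(frames: np.ndarray, k: int = 9) -> np.ndarray:
    """Return the top-k eigenframes of a video via PCA (illustrative sketch).

    frames: array of shape (T, H, W), one grayscale face frame per row,
    assumed already segmented and normalised.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(np.float64)
    mean = X.mean(axis=0)        # mean frame across time
    Xc = X - mean                # centre the data
    # SVD of the centred data: rows of Vt are orthonormal principal
    # axes in pixel space, ordered by explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Each principal axis, reshaped to frame size, is an "eigenframe"
    # capturing one mode of temporal facial-expression variance.
    return Vt[:k].reshape(k, H, W)

# Toy usage: a 30-frame video of 64x64 frames reduced to 9 eigenframes.
video = np.random.rand(30, 64, 64)
eigenframes = extract_eigenframes(video, k=9)
print(eigenframes.shape)  # (9, 64, 64)
```

The reduced set of k eigenframes (rather than all T raw frames) would then serve as the input stack to the downstream CNNs, which is where the reported reduction in computational time comes from.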

Citation (APA)

Hajarolasvadi, N., & Demirel, H. (2020). Deep facial emotion recognition in video using eigenframes. IET Image Processing, 14(14), 3536–3546. https://doi.org/10.1049/iet-ipr.2019.1566
