Video-based facial expression recognition by removing the style variations

11 citations · 11 readers (Mendeley)

Abstract

This study examines how the performance of person-independent facial expression recognition can be improved by adapting the system to a given person. The proposed method transfers the style of a particular subject into a semi-style-free space, so the person-independent classifier does not need to be retrained to gain this improvement. Style transfer mapping (STM) was originally proposed for image-based classification. The challenges of employing it in video-based facial expression recognition are estimating the STM from image sequences of each subject (adaptation data) and projecting that subject's new sequential data into the semi-style-free space. A mixture of binary support vector machines and hidden Markov models was employed to overcome these challenges. Moreover, virtual samples generated from the person's neutral samples were used to estimate the STM. Experimental results on the CK+ database confirm that the proposed method improves the recognition rate.
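The abstract gives no implementation details, but the core of style transfer mapping is an affine map x -> Ax + b, fitted per subject by regularized least squares, that moves the subject's styled feature vectors toward style-free (class-representative) targets. Below is a minimal NumPy sketch of that idea only; the function names, uniform confidence weights, and regularization parameters `beta` and `gamma` are illustrative assumptions rather than values from the paper, and the SVM/HMM classification stage and the virtual neutral-frame samples are omitted.

```python
# Minimal sketch of a style transfer mapping (STM) fit, assuming each video
# frame is described by a d-dimensional feature vector. Not the authors' code.
import numpy as np


def fit_stm(S, T, f=None, beta=1.0, gamma=1.0):
    """Fit an affine map x -> A x + b that moves a subject's source features
    S (n x d) toward style-free target features T (n x d), by minimizing
        sum_i f_i ||A s_i + b - t_i||^2 + beta ||A - I||_F^2 + gamma ||b||^2.
    Returns (A, b)."""
    n, d = S.shape
    f = np.ones(n) if f is None else np.asarray(f, dtype=float)

    # Augment sources with a constant 1 so that [A b] acts as a single matrix W.
    S_aug = np.hstack([S, np.ones((n, 1))])          # (n, d+1)
    G = (S_aug * f[:, None]).T @ S_aug               # sum_i f_i s~_i s~_i^T
    H = (T * f[:, None]).T @ S_aug                   # sum_i f_i t_i s~_i^T

    # Regularization pulls A toward the identity and b toward zero.
    reg = np.diag(np.concatenate([np.full(d, beta), [gamma]]))
    W0 = np.hstack([np.eye(d), np.zeros((d, 1))])    # [I, 0]

    W = (H + W0 @ reg) @ np.linalg.inv(G + reg)      # closed-form least squares
    return W[:, :d], W[:, d]


def apply_stm(A, b, X):
    """Project new frames X (m x d) of the same subject into the
    semi-style-free space before they reach the fixed classifier."""
    return X @ A.T + b
```

In this sketch, the adaptation data would supply S (the subject's frame features) and T (targets derived, for example, from the subject's neutral frames or from class-representative points), and the unchanged person-independent classifier would then be applied to `apply_stm(A, b, X)`.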

Citation (APA)

Mohammadian, A., Aghaeinia, H., & Towhidkhah, F. (2015). Video-based facial expression recognition by removing the style variations. IET Image Processing, 9(7), 596–603. https://doi.org/10.1049/iet-ipr.2013.0697
