Learning features robust to image variations with Siamese networks for facial expression recognition

14 citations · 8 Mendeley readers

Abstract

This paper proposes a computationally efficient method for learning features robust to image variations for facial expression recognition (FER). The proposed method minimizes the feature difference between an image under an arbitrary image variation and a corresponding target image captured under the best conditions for FER (i.e., a frontal face with uniform illumination). This is achieved by regularizing the objective function during training, where a Siamese network is employed. At the test stage, the learned network parameters are transferred to a convolutional neural network (CNN), from which features robust to image variations are obtained. Experiments were conducted on the Multi-PIE dataset to evaluate the proposed method under a large number of variations, including pose and illumination. The results show that the proposed method improves FER performance under these variations without requiring extra computational complexity.
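
To make the idea concrete, below is a minimal PyTorch sketch of a training objective of this kind: two weight-shared CNN branches (a Siamese pair) process the varied image and its frontal, uniformly lit target, and the expression classification loss is combined with a feature-difference regularizer. The layer sizes, input resolution, number of expression classes, and loss weight lam are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureCNN(nn.Module):
    """Single CNN branch; at test time this network alone is used."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),  # assumes 64x64 grayscale inputs
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        f = self.features(x)
        return f, self.classifier(f)

def siamese_objective(model, x_varied, x_target, labels, lam=0.1):
    """Expression classification loss on the varied image plus a regularizer
    that pulls its features toward those of the target image (frontal face,
    uniform illumination). Both branches share the same weights (one model)."""
    f_varied, logits = model(x_varied)
    f_target, _ = model(x_target)  # second Siamese branch, shared weights
    cls_loss = F.cross_entropy(logits, labels)
    feat_reg = F.mse_loss(f_varied, f_target)
    return cls_loss + lam * feat_reg

At test time only the single CNN branch is kept, so inference cost is the same as for a plain CNN, which is consistent with the paper's claim of no extra computational complexity.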

Citation (APA)

Baddar, W. J., Kim, D. H., & Ro, Y. M. (2017). Learning features robust to image variations with Siamese networks for facial expression recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10132 LNCS, pp. 189–200). Springer Verlag. https://doi.org/10.1007/978-3-319-51811-4_16
