Deep learning for emotion recognition in faces

Abstract

Deep Learning (DL) has shown real promise for improving classification accuracy in emotion recognition problems. In this paper we present experimental results for a deeply-trained model for emotion recognition from facial expression images. We explore two Convolutional Neural Network (CNN) architectures that perform automatic feature extraction and representation, followed by fully connected softmax layers that classify images into seven emotions. The first architecture examines the impact of reducing the number of deep learning layers; the second splits the input images horizontally into two streams based on eye and mouth positions. The first proposed architecture achieves state-of-the-art results with an accuracy of 96.93%, while the second, split-input architecture achieves an average accuracy of 86.73%.
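The two-stream idea can be illustrated with a minimal NumPy sketch. This is not the authors' trained model: the convolution filters and classifier weights below are random and untrained, the eye/mouth split is approximated as the top and bottom halves of the face crop, and the 48×48 input size and emotion labels are assumptions for illustration. It only shows the data flow: split the face horizontally, extract convolutional features per stream, concatenate, and apply a softmax over seven emotion classes.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit a full window."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Assumed label set; the paper classifies seven emotions.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def two_stream_predict(face, rng):
    """Split the face horizontally (eye region / mouth region), run each
    stream through conv -> ReLU -> pool, concatenate, classify with softmax."""
    h = face.shape[0] // 2
    streams = [face[:h], face[h:]]                 # top ~ eyes, bottom ~ mouth
    feats = []
    for s in streams:
        k = rng.standard_normal((3, 3))            # random (untrained) filter
        feats.append(max_pool(relu(conv2d(s, k))).ravel())
    f = np.concatenate(feats)
    W = rng.standard_normal((len(EMOTIONS), f.size)) * 0.01
    b = np.zeros(len(EMOTIONS))
    return softmax(W @ f + b)

rng = np.random.default_rng(0)
face = rng.random((48, 48))                        # stand-in for a grayscale face crop
probs = two_stream_predict(face, rng)              # seven-way probability vector
```

In the paper's actual architectures the filters and classifier weights are learned end-to-end by backpropagation; here they are frozen random values so the sketch stays self-contained.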

Citation (APA)

Ruiz-Garcia, A., Elshaw, M., Altahhan, A., & Palade, V. (2016). Deep learning for emotion recognition in faces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9887 LNCS, pp. 38–46). Springer Verlag. https://doi.org/10.1007/978-3-319-44781-0_5
