Deep learning for real time facial expression recognition in social robots

Abstract

Human-robot interaction is a rapidly growing topic of interest in today’s society. The development of real-time emotion recognition will further improve the relationship between humans and social robots. However, contemporary real-time emotion recognition in unconstrained environments has yet to reach the accuracy levels achieved on controlled, static datasets. In this work, we propose a deep Convolutional Neural Network (CNN), pre-trained as a Stacked Convolutional Autoencoder (SCAE) in a greedy layer-wise unsupervised manner, for emotion recognition from facial expression images taken by a NAO robot. The SCAE model is trained to learn an illumination-invariant, down-sampled feature vector. The weights of the encoder element are then used to initialize the CNN model, which is fine-tuned for classification. We train the model on a corpus composed of gamma-corrected versions of the CK+, JAFFE, FEEDTUM and KDEF datasets. The emotion recognition model produces a state-of-the-art accuracy rate of 99.14% on this corpus. We also show that the proposed training approach significantly improves the CNN’s generalisation ability by over 30% on non-uniform data collected with the NAO robot in unconstrained environments.
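The gamma-correction step mentioned in the abstract — generating differently illuminated copies of each face image so the model learns illumination-invariant features — can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the specific gamma values are illustrative assumptions.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float) -> np.ndarray:
    """Apply gamma correction to an 8-bit grayscale face image.

    Intensities are normalised to [0, 1], raised to the power `gamma`,
    and rescaled to [0, 255]. gamma < 1 brightens the image; gamma > 1
    darkens it, simulating different illumination conditions.
    """
    normalised = image.astype(np.float64) / 255.0
    corrected = np.power(normalised, gamma)
    return np.clip(np.rint(corrected * 255.0), 0, 255).astype(np.uint8)

def augment_with_gamma(images, gammas=(0.5, 1.0, 1.5, 2.0)):
    """Expand a dataset with gamma-corrected copies (illustrative values)."""
    return [gamma_correct(img, g) for img in images for g in gammas]

# Example on a synthetic 2x2 "image":
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
bright = gamma_correct(img, 0.5)  # brightened copy
dark = gamma_correct(img, 2.0)    # darkened copy
```

Training on such augmented copies of CK+, JAFFE, FEEDTUM and KDEF is one plausible way to build the illumination-varied corpus the abstract describes; the paper itself should be consulted for the exact gamma range used.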

Citation (APA)

Ruiz-Garcia, A., Webb, N., Palade, V., Eastwood, M., & Elshaw, M. (2018). Deep learning for real time facial expression recognition in social robots. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11305 LNCS, pp. 392–402). Springer Verlag. https://doi.org/10.1007/978-3-030-04221-9_35