Recognizing facial expressions of occluded faces using convolutional neural networks


Abstract

In this paper, we present an approach based on convolutional neural networks (CNNs) for facial expression recognition in a difficult setting with severe occlusions. More specifically, our task is to recognize the facial expression of a person wearing a virtual reality (VR) headset, which essentially occludes the upper half of the face. To train neural networks accurately for this setting, in which faces are severely occluded, we modify the training examples by intentionally occluding the upper half of each face. This forces the neural networks to focus on the lower half of the face and yields better accuracy than models trained on entire faces. Our empirical results on two benchmark data sets, FER+ and AffectNet, show that the accuracy of our CNN models on lower-half faces is up to 13% higher than that of baseline CNN models trained on entire faces, proving their suitability for the VR setting. Furthermore, our models' accuracy on lower-half faces is no more than 10% below the baseline models' accuracy on full faces, proving that the lower half of the face contains enough cues to accurately predict facial expressions.
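The occlusion-based training modification described in the abstract can be sketched as a simple preprocessing step. This is an illustrative reconstruction, not the authors' code: the function name and the fill value of zero are assumptions, and the paper may occlude the face region differently (e.g., with noise or a headset-shaped mask).

```python
import numpy as np

def occlude_upper_half(image, fill_value=0):
    """Return a copy of the image with the upper half replaced by fill_value,
    simulating the occlusion produced by a VR headset.

    Hypothetical helper; the exact masking used in the paper may differ.
    """
    occluded = image.copy()
    h = image.shape[0]
    occluded[: h // 2] = fill_value  # overwrite the top half of the rows
    return occluded

# Example: apply the occlusion to a dummy 48x48 grayscale face image
face = np.random.rand(48, 48).astype(np.float32)
masked = occlude_upper_half(face)
```

Applying this transformation to every training image forces the CNN to learn features from the mouth and jaw region, matching the information actually available at test time in the VR setting.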

Citation (APA)

Georgescu, M. I., & Ionescu, R. T. (2019). Recognizing facial expressions of occluded faces using convolutional neural networks. In Communications in Computer and Information Science (Vol. 1142 CCIS, pp. 645–653). Springer. https://doi.org/10.1007/978-3-030-36808-1_70
