Emotion Recognition from Occluded Facial Images Using Deep Ensemble Model

30 citations · 42 Mendeley readers

Abstract

Facial expression recognition has been an active research topic for decades, but high intraclass variation makes it challenging. To overcome intraclass variation in visual recognition, we introduce a novel fusion methodology in which the proposed model first extracts features and then fuses them. Specifically, ResNet-50, VGG-19, and Inception-V3 are used for feature learning, followed by feature fusion. Finally, the outputs of the three feature extractors are combined through ensemble learning for the final expression classification. The representation learnt by the proposed methodology is robust to occlusions and pose variations and offers promising accuracy. To evaluate the efficiency of the proposed model, we use two in-the-wild benchmark datasets for facial expression recognition: the Real-world Affective Faces Database (RAF-DB) and AffectNet. The proposed model classifies emotions into seven categories: happiness, anger, fear, disgust, sadness, surprise, and neutral. Furthermore, the performance of the proposed model is compared with other algorithms, focusing on computational cost, convergence, and accuracy on a standard problem specific to classification applications.
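The pipeline the abstract describes (per-backbone feature extraction, concatenation-based fusion, then an ensemble decision over the seven emotion classes) can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the fusion is assumed to be simple concatenation and the ensemble step is assumed to be soft voting (averaging class probabilities), and the feature sizes (2048 for ResNet-50, 4096 for VGG-19, 2048 for Inception-V3) are the typical pooled-feature dimensions of those backbones.

```python
from typing import List

# The seven expression classes used in the paper.
EMOTIONS = ["happiness", "anger", "fear", "disgust", "sadness", "surprise", "neutral"]

def fuse_features(per_backbone_feats: List[List[float]]) -> List[float]:
    """Early fusion: concatenate the feature vectors produced by each backbone."""
    fused: List[float] = []
    for feat in per_backbone_feats:
        fused.extend(feat)
    return fused

def soft_vote(per_model_probs: List[List[float]]) -> int:
    """Ensemble step: average class probabilities across models, return the argmax index."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    mean = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: mean[c])

# Stand-in feature vectors with typical pooled-output sizes for each backbone.
resnet50_feat = [0.0] * 2048   # ResNet-50 global-average-pool output
vgg19_feat = [0.0] * 4096     # VGG-19 fc7 output
inception_feat = [0.0] * 2048  # Inception-V3 pooled output

fused = fuse_features([resnet50_feat, vgg19_feat, inception_feat])
print(len(fused))  # -> 8192

# Soft voting over three hypothetical per-model probability distributions:
# model 1 favours "happiness", models 2 and 3 favour "sadness", so the
# averaged distribution peaks at "sadness".
probs = [
    [0.5, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05],
    [0.1, 0.1, 0.1, 0.1, 0.5, 0.05, 0.05],
    [0.3, 0.05, 0.05, 0.05, 0.45, 0.05, 0.05],
]
idx = soft_vote(probs)
print(EMOTIONS[idx])  # -> sadness
```

Soft voting is only one way to realize the ensemble; majority (hard) voting or a learned meta-classifier over the fused features would fit the same description in the abstract.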

Citation (APA)

Ullah, Z., Mohmand, M. I., ur Rehman, S., Zubair, M., Driss, M., Boulila, W., … Alwawi, I. (2022). Emotion Recognition from Occluded Facial Images Using Deep Ensemble Model. Computers, Materials and Continua, 73(3), 4465–4487. https://doi.org/10.32604/cmc.2022.029101
