Facial expressions convey abundant information about human emotions and play a central role in communication between people. Facial expression classification is applied in fields such as remote learning, medical care, and smart traffic. However, because facial emotions are complex and diverse, existing facial expression recognition models achieve low recognition rates and struggle to extract the precise features associated with expression changes. To overcome this problem, we propose the Multi-feature Integrated Concurrent Neural Network (MICNN), which differs significantly from single-network architectures. It aggregates the prominent features of facial expressions by integrating three networks, a Sequential Convolutional Neural Network (SCNN), a Residual Dense Network (RDN), and an Attention Residual Learning Network (ARLN), to improve the accuracy of facial emotion detection. Additionally, Local Binary Pattern (LBP) and Principal Component Analysis (PCA) are applied to represent the facial features, and these are combined with texture features identified by the Gray-Level Co-occurrence Matrix (GLCM). Finally, the integrated features are fed into a softmax layer to classify the facial images. Experiments carried out on benchmark datasets with k-fold cross-validation demonstrate the superiority of the proposed model.
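The handcrafted branch of the pipeline described above (LBP features compressed with PCA and fused with GLCM texture statistics before a softmax layer) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it uses random placeholder images, a simplified 3x3 LBP, a single horizontal-offset GLCM, and a random untrained linear head in place of the trained MICNN classifier.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: encode each pixel by comparing its 8 neighbours
    to the centre value (simplified stand-in for the paper's LBP step)."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= ((nb >= centre).astype(np.uint8) << bit)
    return code

def glcm_features(img, levels=8):
    """Horizontal-offset-1 GLCM with contrast/homogeneity/energy stats."""
    q = img.astype(int) * levels // 256        # quantize to `levels` grey levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm += glcm.T                             # make symmetric
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    energy = np.sqrt((glcm ** 2).sum())
    return np.array([contrast, homogeneity, energy])

def pca_reduce(X, k):
    """PCA via SVD on mean-centred features."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Toy batch of random 48x48 grayscale "faces" (placeholder data).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 48, 48), dtype=np.uint8)

# 256-bin LBP histogram concatenated with GLCM texture stats per image.
feats = np.stack([
    np.concatenate([
        np.bincount(lbp_image(im).ravel(), minlength=256) / im.size,
        glcm_features(im),
    ])
    for im in images
])                                             # shape (20, 259)

reduced = pca_reduce(feats, k=5)               # compact descriptor per image

# Softmax over a random linear head (7 hypothetical emotion classes),
# standing in for the trained classifier.
logits = reduced @ rng.normal(size=(5, 7))
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
```

In the actual MICNN these handcrafted descriptors are fused with the features learned by the three concurrent network branches before the softmax layer; the sketch only shows the handcrafted side of that fusion.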
Dhivyaa, C. R., Nithya, K., Karthika, K., & Mythili, S. (2022). Multi-Feature Integrated Concurrent Neural Network for Human Facial Expression Recognition. Journal of Internet Technology, 23(6), 1263–1274. https://doi.org/10.53106/160792642022112306009