Audio-Visual Emotion Recognition System Using Multi-Modal Features


Abstract

Facial Expression Recognition (FER) remains a challenging problem because of the high variability of face geometry and appearance. Since convolutional neural networks (CNNs) are well suited to characterizing 2-D signals, the authors propose a feature-selection model based on the AlexNet architecture that automatically extracts and filters facial features for emotion recognition in video. For emotion recognition in audio, they use a deep LSTM-RNN. Finally, they propose a probabilistic model that fuses the audio and visual models, combining a subject's facial features and speech. The extracted features are combined and used to train linear SVM (Support Vector Machine) classifiers. The proposed model outperforms existing models and achieves state-of-the-art performance for the audio, visual, and fusion models. It classifies the seven known facial expressions, namely anger, happiness, surprise, fear, disgust, sadness, and neutral, on the eNTERFACE'05 dataset with an overall accuracy of 76.61%.
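
Since the abstract describes the pipeline only at a high level, the following is a minimal illustrative sketch of that kind of audio-visual pipeline, assuming PyTorch/torchvision for the AlexNet and LSTM branches and scikit-learn for the linear SVM. Every dimension and hyperparameter below is an assumption, and simple feature concatenation stands in for the paper's probabilistic fusion step; this is not the authors' implementation.

    # Illustrative sketch of an audio-visual emotion pipeline (NOT the paper's
    # exact model): AlexNet as a fixed visual feature extractor, a 2-layer
    # LSTM over MFCC frames for audio, and a linear SVM on fused features.
    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import LinearSVC

    EMOTIONS = ["anger", "happiness", "surprise", "fear",
                "disgust", "sadness", "neutral"]

    # Visual branch: pretrained AlexNet with the final FC layer removed,
    # leaving a 4096-d feature vector per face crop. The "DEFAULT" weights
    # shorthand assumes torchvision >= 0.13.
    alexnet = models.alexnet(weights="DEFAULT")
    alexnet.classifier = alexnet.classifier[:-1]
    alexnet.eval()

    def visual_features(face_batch: torch.Tensor) -> np.ndarray:
        # face_batch: (N, 3, 224, 224) preprocessed face crops
        with torch.no_grad():
            return alexnet(face_batch).numpy()          # (N, 4096)

    # Audio branch: deep (2-layer) LSTM over per-frame MFCC vectors.
    # n_mfcc, hidden size, and depth are assumptions for illustration.
    class AudioLSTM(nn.Module):
        def __init__(self, n_mfcc=40, hidden=128, layers=2):
            super().__init__()
            self.lstm = nn.LSTM(n_mfcc, hidden,
                                num_layers=layers, batch_first=True)

        def forward(self, mfcc_seq):                    # (N, T, n_mfcc)
            _, (h, _) = self.lstm(mfcc_seq)
            return h[-1]                                # (N, hidden)

    audio_net = AudioLSTM()
    audio_net.eval()

    def audio_features(mfcc_batch: torch.Tensor) -> np.ndarray:
        with torch.no_grad():
            return audio_net(mfcc_batch).numpy()        # (N, 128)

    # Fusion: concatenate the two modality features and train a linear SVM.
    # Plain concatenation is a simplification of the probabilistic fusion
    # described in the abstract.
    def train_fusion_svm(faces, mfccs, labels):
        X = np.concatenate([visual_features(faces),
                            audio_features(mfccs)], axis=1)
        clf = LinearSVC(C=1.0, max_iter=10000)
        clf.fit(X, labels)          # labels are indices into EMOTIONS
        return clf

In this sketch both networks are used as frozen feature extractors; in practice the CNN and LSTM would be trained or fine-tuned on the emotion data before their features are handed to the SVM.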

Citation (APA)

Handa, A., Agarwal, R., & Kohli, N. (2021). Audio-Visual Emotion Recognition System Using Multi-Modal Features. International Journal of Cognitive Informatics and Natural Intelligence, 15(4). https://doi.org/10.4018/IJCINI.20211001.oa34
