Emotion Recognition Expressed on the Face By Multimodal Method using Deep Learning

Abstract

Emotion recognition plays a vital role in behavioral and emotional interactions between humans. It is a difficult task because it relies on predicting abstract emotional states from multimodal input data. Emotion recognition systems operate in three phases: first, input data is acquired from the real world through sensors; then emotional features are extracted; finally, those features are classified to predict the emotion. Deep learning methods enable recognition in different ways. In this article, we focus on facial expression. We extract the emotional features expressed on the face in two ways, using two different methods. On the one hand, we use Gabor filters to extract facial textures and appearance at different scales and orientations. On the other hand, we extract the movements of the facial muscles, namely around the eyes, eyebrows, nose, and mouth. We then classify each stream using convolutional neural networks (CNNs) and perform a decision-level fusion. The convolutional network model has been trained and validated on facial expression datasets.
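As a rough illustration of the pipeline the abstract describes, the Python sketch below (using OpenCV and Keras) builds a Gabor filter bank over several scales and orientations, classifies the texture responses with a small CNN, classifies landmark displacements for the eyes, eyebrows, nose, and mouth with a second network, and fuses the two softmax outputs at decision level. The layer sizes, the 7-class emotion set, the 48×48 face resolution, and the 68-landmark convention are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the two-stream pipeline; not the authors' exact model.
import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_EMOTIONS = 7          # assumed 7 basic emotion classes; dataset-dependent
FACE_SIZE = (48, 48)      # assumed input resolution

def gabor_bank(scales=(4, 8, 16), orientations=8):
    """Build Gabor kernels over several scales and orientations."""
    kernels = []
    for ksize in scales:
        for i in range(orientations):
            theta = i * np.pi / orientations
            kernels.append(cv2.getGaborKernel(
                (ksize, ksize), sigma=ksize / 3.0, theta=theta,
                lambd=ksize / 2.0, gamma=0.5, psi=0))
    return kernels

def gabor_features(gray_face, kernels):
    """Stack the filter responses into a multi-channel feature map."""
    face = cv2.resize(gray_face, FACE_SIZE).astype(np.float32) / 255.0
    responses = [cv2.filter2D(face, cv2.CV_32F, k) for k in kernels]
    return np.stack(responses, axis=-1)       # shape (48, 48, n_kernels)

def texture_cnn(n_channels):
    """Small CNN over the Gabor responses (architecture is a guess)."""
    return models.Sequential([
        layers.Input(shape=(*FACE_SIZE, n_channels)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])

def landmark_mlp(n_landmarks=68):
    """Classifier over (x, y) displacements of facial landmarks around the
    eyes, eyebrows, nose and mouth; 68 points is the common dlib convention."""
    return models.Sequential([
        layers.Input(shape=(n_landmarks * 2,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_EMOTIONS, activation="softmax"),
    ])

def fuse_decisions(p_texture, p_landmarks, w=0.5):
    """Decision-level fusion: weighted average of the two softmax outputs."""
    return w * p_texture + (1 - w) * p_landmarks
```

Averaging class probabilities is only one possible decision-level rule; a product rule or a learned weighting over the two streams would slot into `fuse_decisions` just as easily.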

Citation (APA)

Emotion Recognition Expressed on the Face By Multimodal Method using Deep Learning. (2019). International Journal of Engineering and Advanced Technology, 9(2), 886–891. https://doi.org/10.35940/ijeat.a1825.129219
