The SYSU system for CCPR 2016 multimodal emotion recognition challenge

Abstract

In this paper, we propose a multimodal emotion recognition system that combines information from facial, text, and speech data. First, we propose a residual network architecture within the convolutional neural network (CNN) framework to improve facial expression recognition performance, and we perform video frame selection to fine-tune our pre-trained model. Second, whereas text emotion recognition conventionally deals with clean, manually transcribed text, here we adopt an automatic speech recognition (ASR) engine to transcribe the speech into text and then apply a Support Vector Machine (SVM) on top of bag-of-words (BoW) features to predict the emotion labels. Third, we extract openSMILE-based utterance-level features and MFCC-GMM-based zero-order statistics features for subsequent SVM modeling in the speech-based subsystem. Finally, score-level fusion is used to combine the multimodal information. Experiments were carried out on the CCPR 2016 Multimodal Emotion Recognition Challenge database; our proposed multimodal system achieved 36% macro average precision on the test set, outperforming the baseline by 6% absolute.

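For illustration, below is a minimal sketch of the score-level fusion step described in the abstract, assuming each subsystem (face, text, speech) outputs per-class posterior scores for the same set of emotion classes. The paper does not specify the fusion rule or weights here, so the weighted-average scheme, the weights, and the example scores are assumptions, not the authors' exact method.

```python
import numpy as np

def fuse_scores(face_scores, text_scores, speech_scores,
                weights=(0.4, 0.3, 0.3)):
    """Combine per-class scores from the three subsystems.

    Each *_scores argument is a 1-D array of length n_classes
    (e.g. posterior or normalized decision scores for one utterance).
    Returns the fused scores and the predicted class index.
    """
    stacked = np.stack([face_scores, text_scores, speech_scores])  # (3, n_classes)
    w = np.asarray(weights)[:, None]                               # (3, 1)
    fused = (w * stacked).sum(axis=0)                              # weighted sum over modalities
    return fused, int(np.argmax(fused))

# Hypothetical per-class scores for a 4-class emotion task
face = np.array([0.1, 0.6, 0.2, 0.1])
text = np.array([0.3, 0.3, 0.3, 0.1])
speech = np.array([0.2, 0.5, 0.2, 0.1])
fused, label = fuse_scores(face, text, speech)
print(fused, label)
```

In practice the weights would be tuned on a development set; an equal-weight average is the simplest fallback when no such set is available.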
Citation

He, G., Chen, J., Liu, X., & Li, M. (2016). The SYSU system for CCPR 2016 multimodal emotion recognition challenge. In Communications in Computer and Information Science (Vol. 663, pp. 707–720). Springer Verlag. https://doi.org/10.1007/978-981-10-3005-5_58
