Audio-video based multimodal emotion recognition using SVMs and deep learning

Abstract

In this paper, we explore a multi-feature classification framework for the Multimodal Emotion Recognition Challenge, held as part of the Chinese Conference on Pattern Recognition (CCPR 2016). The task of the challenge is to recognize one of eight facial emotions in short video segments extracted from Chinese films, TV series and talk shows. In our framework, both traditional hand-crafted methods and Deep Convolutional Neural Network (DCNN) methods are used to extract various features. A separate classifier is trained on each feature type to predict video emotion labels, and a decision-level fusion method is then used to aggregate the individual predictions. Results on the competition database show that our method is effective for Chinese facial emotion recognition.
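To illustrate the decision-level fusion the abstract describes, the following is a minimal sketch: one SVM is trained per feature type and their class-probability estimates are combined by weighted averaging. Feature extraction (hand-crafted or DCNN) is omitted, and the feature dimensions, fusion weights, and synthetic data below are illustrative assumptions, not the authors' actual configuration.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_videos, n_classes = 200, 8

    # Stand-ins for two feature types extracted from the same videos,
    # e.g. a hand-crafted audio descriptor and a DCNN visual descriptor.
    audio_feats = rng.normal(size=(n_videos, 64))
    visual_feats = rng.normal(size=(n_videos, 128))
    labels = rng.integers(0, n_classes, size=n_videos)

    # One SVM per feature type, trained independently.
    audio_svm = SVC(kernel="rbf", probability=True).fit(audio_feats, labels)
    visual_svm = SVC(kernel="rbf", probability=True).fit(visual_feats, labels)

    def fuse_predictions(audio_x, visual_x, weights=(0.4, 0.6)):
        """Decision-level fusion: weighted average of per-classifier
        class probabilities, then argmax over the fused scores."""
        p_audio = audio_svm.predict_proba(audio_x)
        p_visual = visual_svm.predict_proba(visual_x)
        fused = weights[0] * p_audio + weights[1] * p_visual
        return fused.argmax(axis=1)

    # Example: fuse the two per-modality classifiers on a few videos.
    print(fuse_predictions(audio_feats[:5], visual_feats[:5]))

In practice the fusion weights would be tuned on a validation set; the paper's actual fusion scheme and feature set are described in the full text.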

Citation (APA)

Sun, B., Xu, Q., He, J., Yu, L., Li, L., & Wei, Q. (2016). Audio-video based multimodal emotion recognition using SVMs and deep learning. In Communications in Computer and Information Science (Vol. 663, pp. 621–631). Springer Verlag. https://doi.org/10.1007/978-981-10-3005-5_51
