MEC 2016: The multimodal emotion recognition challenge of CCPR 2016


Abstract

Emotion recognition is a significant research field of pattern recognition and artificial intelligence. The Multimodal Emotion Recognition Challenge (MEC) is part of the 2016 Chinese Conference on Pattern Recognition (CCPR). The goal of this competition is to compare multimedia processing and machine learning methods for multimodal emotion recognition. The challenge also aims to provide a common benchmark data set, to bring together the audio and video emotion recognition communities, and to promote research in multimodal emotion recognition. The data used in this challenge come from the Chinese Natural Audio-Visual Emotion Database (CHEAVD), which is selected from Chinese movies and TV programs. The discrete emotion labels were annotated by four experienced assistants. Three sub-challenges are defined: audio, video, and multimodal emotion recognition. This paper introduces the baseline audio and visual features, along with baseline recognition results obtained with Random Forests.
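The abstract mentions that the baseline results are obtained with Random Forests over audio and visual features. A minimal sketch of such a feature-fusion baseline is shown below; the feature dimensions, number of emotion classes, and fusion scheme here are illustrative assumptions, not the challenge's actual pipeline.

```python
# Hypothetical sketch of a Random Forest baseline for multimodal emotion
# recognition. Feature shapes and the 8-class label set are assumptions
# for illustration only, not the MEC 2016 specification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for utterance-level audio and visual feature vectors.
n_samples, n_audio, n_video = 200, 30, 20
X_audio = rng.normal(size=(n_samples, n_audio))
X_video = rng.normal(size=(n_samples, n_video))
y = rng.integers(0, 8, size=n_samples)  # assumed discrete emotion labels

# Feature-level fusion: concatenate the two modalities per utterance.
X = np.hstack([X_audio, X_video])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
preds = clf.predict(X)
print(preds.shape)  # one predicted emotion label per utterance
```

Separate audio-only and video-only models (the first two sub-challenges) would simply train the same classifier on `X_audio` or `X_video` alone instead of the concatenated features.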

Citation (APA)

Li, Y., Tao, J., Schuller, B., Shan, S., Jiang, D., & Jia, J. (2016). MEC 2016: The multimodal emotion recognition challenge of CCPR 2016. In Communications in Computer and Information Science (Vol. 663, pp. 667–678). Springer Verlag. https://doi.org/10.1007/978-981-10-3005-5_55
