Multi-feature based emotion recognition for video clips

Citations: 104
Mendeley readers: 101

Abstract

In this paper, we present our latest progress on emotion recognition techniques that combine acoustic features and facial features in both non-temporal and temporal modes. The paper details the techniques we used in the Audio-Video Emotion Recognition subtask of the 2018 Emotion Recognition in the Wild (EmotiW) Challenge. After fusing the multimodal results, our final accuracy on the Acted Facial Expression in the Wild (AFEW) test set reaches 61.87%, which is 1.53% higher than last year's best result and demonstrates the effectiveness of our methods.
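The abstract mentions fusing multimodal results but does not specify the fusion scheme. A common baseline for this kind of audio-visual combination is weighted late fusion of per-modality class scores; the sketch below illustrates that idea only, with hypothetical weights and scores, and is not the authors' actual method.

```python
import numpy as np

def late_fusion(scores, weights):
    """Weighted-average late fusion of per-modality class scores.

    scores  : list of (num_classes,) score vectors, one per modality/model
    weights : list of floats, one per modality/model
    Returns the index of the predicted emotion class.
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize fusion weights
    fused = (weights[:, None] * scores).sum(axis=0)  # weighted sum per class
    return int(np.argmax(fused))

# Hypothetical softmax outputs over the 7 AFEW emotion classes
audio_scores = [0.10, 0.05, 0.40, 0.15, 0.10, 0.10, 0.10]
video_scores = [0.05, 0.10, 0.30, 0.35, 0.05, 0.10, 0.05]

pred = late_fusion([audio_scores, video_scores], weights=[0.5, 0.5])
```

In practice the weights are typically tuned on a validation split so that stronger modalities contribute more to the fused decision.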

Citation (APA)

Liu, C., Tang, T., Wang, M., & Lv, K. (2018). Multi-feature based emotion recognition for video clips. In ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction (pp. 630–634). Association for Computing Machinery, Inc. https://doi.org/10.1145/3242969.3264989
