Recognizing Emotion in the Wild using Multimodal Data

Citations: 6 · Mendeley readers: 19

Abstract

In this work, we present our approach for all four tracks of the eighth Emotion Recognition in the Wild Challenge (EmotiW 2020). The four tasks are group emotion recognition, driver gaze prediction, predicting engagement in the wild, and emotion recognition using physiological signals. We explore multiple approaches, including classical machine learning tools such as random forests, state-of-the-art deep neural networks, and several fusion- and ensemble-based methods. We also show that similar approaches can be reused across tracks, as many of the features (e.g., facial features) generalize well to the different problems. We report evaluation results that are comparable to or better than the baselines on both the validation and test sets for most of the tracks.
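The paper itself does not include code here, but the fusion idea the abstract describes — modality-specific classifiers whose outputs are combined — can be illustrated with a minimal late-fusion sketch. Everything below is a hypothetical stand-in: the feature dimensions, synthetic data, and the choice of random forests per modality are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for two modalities (dimensions are made up):
# facial features and physiological signals, with three emotion classes.
rng = np.random.default_rng(0)
n = 200
face_feats = rng.normal(size=(n, 16))    # stand-in for facial features
physio_feats = rng.normal(size=(n, 8))   # stand-in for physiological signals
labels = rng.integers(0, 3, size=n)

# One classifier per modality (random forests, as the abstract mentions).
face_clf = RandomForestClassifier(n_estimators=50, random_state=0)
face_clf.fit(face_feats, labels)
physio_clf = RandomForestClassifier(n_estimators=50, random_state=0)
physio_clf.fit(physio_feats, labels)

# Late fusion: average the per-class probabilities from each modality,
# then predict the class with the highest fused probability.
fused = (face_clf.predict_proba(face_feats)
         + physio_clf.predict_proba(physio_feats)) / 2.0
preds = fused.argmax(axis=1)
```

Averaging probabilities is only one of many fusion strategies (others include weighted averaging, stacking a meta-classifier on the concatenated outputs, or feature-level fusion before training); the sketch shows the simplest variant.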

Citation (APA)

Srivastava, S., Lakshminarayan, S. A. Si., Hinduja, S., Jannat, S. R., Elhamdadi, H., & Canavan, S. (2020). Recognizing Emotion in the Wild using Multimodal Data. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 849–857). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3417970
