Multimodal Modeling of Task-Mediated Confusion

Abstract

To build more human-like cognitive agents, we must design systems that detect various human emotions and respond appropriately. Confusion, a combined emotional and cognitive state, remains under-explored. In this paper, we build on prior work to develop models that detect confusion from three modalities: video (facial features), audio (prosodic features), and text (transcribed speech features). Our research improves the data collection process by allowing continuous (rather than discrete) annotation of confusion levels. We also build models based on recurrent neural networks (RNNs), given their ability to model sequential data. In our experiments, we find that the text and video modalities are the most important for predicting confusion, while the explored audio features are relatively unimportant predictors of confusion in our data.
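The abstract describes RNN models that map sequences of fused multimodal features to continuous confusion levels. As a minimal illustrative sketch (not the authors' actual architecture — the feature dimensions, weights, and readout here are hypothetical), a simple Elman-style RNN can produce one confusion score per timestep from concatenated video, audio, and text feature vectors:

```python
import math
import random

def rnn_confusion_scores(sequence, w_xh, w_hh, w_hy, hidden_size):
    """Toy Elman RNN: per-timestep confusion score from fused features.

    sequence  -- list of fused feature vectors (e.g. video + audio + text)
    w_xh      -- input-to-hidden weights, shape [hidden_size][input_dim]
    w_hh      -- hidden-to-hidden weights, shape [hidden_size][hidden_size]
    w_hy      -- hidden-to-output weights, length hidden_size
    """
    h = [0.0] * hidden_size
    scores = []
    for x in sequence:
        # Recurrent update: h_t = tanh(W_xh x_t + W_hh h_{t-1})
        h_new = []
        for i in range(hidden_size):
            s = sum(w_xh[i][j] * x[j] for j in range(len(x)))
            s += sum(w_hh[i][j] * h[j] for j in range(hidden_size))
            h_new.append(math.tanh(s))
        h = h_new
        # Sigmoid readout -> continuous confusion level in (0, 1),
        # matching the continuous (rather than discrete) annotation scheme
        y = sum(w_hy[j] * h[j] for j in range(hidden_size))
        scores.append(1.0 / (1.0 + math.exp(-y)))
    return scores

# Demo with random weights and a hypothetical 6-dim fused feature space
random.seed(0)
in_dim, hid = 6, 4
w_xh = [[random.uniform(-0.5, 0.5) for _ in range(in_dim)] for _ in range(hid)]
w_hh = [[random.uniform(-0.5, 0.5) for _ in range(hid)] for _ in range(hid)]
w_hy = [random.uniform(-0.5, 0.5) for _ in range(hid)]
seq = [[random.uniform(-1, 1) for _ in range(in_dim)] for _ in range(5)]
scores = rnn_confusion_scores(seq, w_xh, w_hh, w_hy, hid)
```

In practice such models are trained (e.g. via backpropagation through time in a deep learning framework) rather than run with random weights; the sketch only shows how a recurrent state lets each prediction depend on the preceding feature frames.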

Citation (APA)

Mince, C., Rhomberg, S., Alm, C. O., Bailey, R., & Ororbia, A. (2022). Multimodal Modeling of Task-Mediated Confusion. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Student Research Workshop (pp. 188–194). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-srw.24
