Categorical vs. dimensional representations in multimodal affect detection during learning


Abstract

Learners experience a variety of emotions during learning sessions with Intelligent Tutoring Systems (ITS). The research community is building systems that are aware of these experiences, generally represented either as a category or as a point in a low-dimensional space. State-of-the-art systems detect these affective states from multimodal data in naturalistic scenarios. This paper provides evidence of how the choice of representation affects the quality of the detection system. We present a user-independent model for detecting learners' affective states from video and physiological signals using both the categorical and dimensional representations. Machine learning techniques are used to select the best subset of features and to classify the degrees of emotion under both representations. We provide evidence that the dimensional representation, particularly valence, produces higher accuracy. © 2012 Springer-Verlag.
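The abstract describes a two-stage pipeline: select a subset of the multimodal features, then classify degrees of emotion. The sketch below illustrates that general shape only, not the authors' method: the synthetic data, the correlation-based feature scoring, and the nearest-centroid classifier are all illustrative assumptions, since the paper's actual features and learners are not specified here.

```python
# Hedged sketch of a select-then-classify affect-detection pipeline.
# All data and model choices are assumptions for illustration.
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation, used as a simple feature-scoring rule."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Synthetic "multimodal" samples: a few informative features plus noise,
# with a binary valence label (a thresholded dimensional representation).
def make_sample(label):
    informative = [label + random.gauss(0, 0.5) for _ in range(3)]
    noise = [random.gauss(0, 1) for _ in range(5)]
    return informative + noise, label

data = [make_sample(random.choice([0, 1])) for _ in range(200)]
X = [row for row, _ in data]
y = [lab for _, lab in data]

# Stage 1 -- feature selection: keep the k features most correlated
# with the valence label.
k = 3
scores = [abs(pearson([row[j] for row in X], y)) for j in range(len(X[0]))]
selected = sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Stage 2 -- classification: nearest-centroid on the selected subset.
def centroid(rows):
    return [sum(r[j] for r in rows) / len(rows) for j in selected]

c0 = centroid([r for r, lab in zip(X, y) if lab == 0])
c1 = centroid([r for r, lab in zip(X, y) if lab == 1])

def predict(row):
    d0 = sum((row[j] - m) ** 2 for j, m in zip(selected, c0))
    d1 = sum((row[j] - m) ** 2 for j, m in zip(selected, c1))
    return 0 if d0 < d1 else 1

accuracy = sum(predict(r) == lab for r, lab in zip(X, y)) / len(X)
```

A real system of the kind the abstract describes would replace the synthetic rows with video and physiological features, and would evaluate user-independently (e.g., leave-one-subject-out) rather than on the training data as this toy sketch does.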

APA

Hussain, M. S., Monkaresi, H., & Calvo, R. A. (2012). Categorical vs. dimensional representations in multimodal affect detection during learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7315 LNCS, pp. 78–83). https://doi.org/10.1007/978-3-642-30950-2_11
