Multimodal Data Fusion to Track Students’ Distress during Educational Gameplay


Abstract

Using multimodal data fusion techniques, we built and tested prediction models to track middle-school students' distress states during educational gameplay. We collected and analyzed 1,145 data instances, sampled from 31 middle-school students' audio- and video-recorded gameplay sessions. We conducted data wrangling with student gameplay data from multiple sources, such as individual facial expression recordings and gameplay logs. Using supervised machine learning, we built and tested candidate classifiers that each yielded an estimated probability of distress states. We then conducted confidence-based data fusion, averaging the estimated probability scores from the unimodal classifiers, each trained on a single data source. The results suggest that the classifier with multimodal data fusion tracks distress states during educational gameplay more accurately than the unimodal classifiers. These findings support the feasibility of multimodal data fusion in developing game-based learning analytics, and the study proposes several methodological means for optimizing multimodal data fusion in educational game research.
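The confidence-based fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the example probability values, and the 0.5 decision threshold are all assumptions for demonstration.

```python
# Hypothetical sketch of confidence-based (late) fusion: each unimodal
# classifier (e.g., one for facial expressions, one for gameplay logs)
# outputs an estimated probability of distress, and the fused score is
# the average of those probabilities.

def fuse_confidence(probabilities):
    """Average the estimated distress probabilities from unimodal classifiers."""
    if not probabilities:
        raise ValueError("need at least one unimodal probability")
    return sum(probabilities) / len(probabilities)

def predict_distress(probabilities, threshold=0.5):
    """Fused binary distress prediction; the threshold is an assumption."""
    return fuse_confidence(probabilities) >= threshold

# Illustrative scores from two unimodal classifiers (values are made up):
p_face, p_log = 0.72, 0.40
fused = fuse_confidence([p_face, p_log])   # 0.56
label = predict_distress([p_face, p_log])  # True
```

Averaging probability scores is one of the simplest late-fusion schemes; weighted averages or stacked meta-classifiers are common alternatives when one modality is more reliable than another.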

Citation (APA)

Moon, J., Ke, F., Sokolikj, Z., & Dahlstrom-Hakki, I. (2022). Multimodal Data Fusion to Track Students’ Distress during Educational Gameplay. Journal of Learning Analytics, 9(3), 75–87. https://doi.org/10.18608/jla.2022.7631
