Predicting learners’ emotions in mobile MOOC learning via a multimodal intelligent tutor

Abstract

Massive Open Online Courses (MOOCs) are a promising approach for scalable knowledge dissemination. However, they also face major challenges such as low engagement, low retention rate, and lack of personalization. We propose AttentiveLearner2, a multimodal intelligent tutor running on unmodified smartphones, to supplement today’s clickstream-based learning analytics for MOOCs. AttentiveLearner2 uses both the front and back cameras of a smartphone as two complementary and fine-grained feedback channels in real time: the back camera monitors learners’ photoplethysmography (PPG) signals and the front camera tracks their facial expressions during MOOC learning. AttentiveLearner2 implicitly infers learners’ affective and cognitive states during learning from their PPG signals and facial expressions. Through a 26-participant user study, we found that: (1) AttentiveLearner2 can detect 6 emotions in mobile MOOC learning reliably with high accuracy (average accuracy = 84.4%); (2) the detected emotions can predict learning outcomes (best R2 = 50.6%); and (3) it is feasible to track both PPG signals and facial expressions in real time in a scalable manner on today’s unmodified smartphones.
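
The abstract does not spell out how the back-camera PPG channel is processed. As a minimal sketch of that general idea (my own illustration, not the authors' code, assuming frames arrive as RGB NumPy arrays captured while a fingertip covers the back camera with the flash on, and that mean red-channel intensity tracks blood-volume changes), the following shows how a PPG waveform and heart-rate estimate could be derived from a short frame sequence:

```python
import numpy as np

def ppg_from_frames(frames, fps):
    """Estimate a PPG waveform and heart rate (BPM) from back-camera frames.

    frames: iterable of HxWx3 uint8 RGB arrays (hypothetical input format,
            captured with the fingertip covering the lens and flash on).
    fps:    frame rate of the capture.
    """
    # Mean red-channel intensity per frame approximates blood-volume changes.
    signal = np.array([f[..., 0].astype(np.float64).mean() for f in frames])

    # Remove the DC offset and normalize so the pulse component dominates.
    signal = signal - signal.mean()
    signal = signal / (signal.std() + 1e-9)

    # Pick the dominant frequency in a plausible heart-rate band (0.7-3.5 Hz).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.7) & (freqs <= 3.5)
    hr_hz = freqs[band][np.argmax(spectrum[band])]

    return signal, hr_hz * 60.0  # waveform and heart rate in beats per minute
```

In practice the paper's system also tracks facial expressions via the front camera and fuses both channels to infer affective and cognitive states; the sketch above only illustrates the physiological-signal side in the simplest possible form.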

Citation (APA)

Pham, P., & Wang, J. (2018). Predicting learners’ emotions in mobile MOOC learning via a multimodal intelligent tutor. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10858 LNCS, pp. 150–159). Springer Verlag. https://doi.org/10.1007/978-3-319-91464-0_15
