We leverage eye-tracking data to predict user performance and levels of cognitive abilities while reading magazine-style narrative visualizations (MSNVs), a widespread form of multimodal document that combines text and visualizations. Such predictions are motivated by recent interest in devising user-adaptive MSNVs that can dynamically adapt to a user's needs. Our results provide evidence for the feasibility of real-time user modeling in MSNVs, as we are the first to consider eye-tracking data for predicting task comprehension and cognitive abilities while users process multimodal documents. We conclude with a discussion of the implications for the design of personalized MSNVs.
Barral, O., Lallé, S., Guz, G., Iranpour, A., & Conati, C. (2020). Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 163–173). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3418884