Hand-eye Coordination for Textual Difficulty Detection in Text Summarization

Abstract

Summarizing a document is a complex task that requires a person to multitask between reading and writing. Since a person's cognitive load during reading or writing is known to depend on the level of comprehension or difficulty of the text, it should be possible to analyze the user's cognitive process while carrying out the task, as evidenced by their eye gaze and typing behavior, to gain insight into the different difficulty levels. In this paper, we divide the summary-writing process into phases and extract gaze and typing features from each phase according to the characteristics of eye-gaze behavior and typing dynamics. Combining these multimodal features, we build a classifier that achieves an accuracy of 91.0% for difficulty-level detection, roughly a 55% improvement over the baseline and at least a 15% improvement over models built on a single modality. We also investigate the possible reasons for the superior performance of our multimodal features.
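The abstract describes a pipeline of per-phase feature extraction, multimodal fusion, and classification. The sketch below illustrates that general idea only: the feature names, the early fusion by simple concatenation, the random-forest model, and the synthetic data are all illustrative assumptions, not the paper's actual features, dataset, or classifier.

    # Hypothetical sketch: fuse per-phase gaze and typing features and
    # train a difficulty-level classifier on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_sessions = 60  # one row per summary-writing session (assumed)

    # Assumed gaze features per phase, e.g. fixation duration, regression count.
    gaze_features = rng.normal(size=(n_sessions, 6))
    # Assumed typing features per phase, e.g. inter-key interval, pause count.
    typing_features = rng.normal(size=(n_sessions, 4))

    # Multimodal fusion by feature concatenation (an assumption; the paper
    # states only that the gaze and typing features are combined).
    X = np.hstack([gaze_features, typing_features])
    y = rng.integers(0, 2, size=n_sessions)  # difficulty labels: easy vs. hard

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean CV accuracy: {scores.mean():.3f}")

Concatenation is only one way to combine modalities; the reported gain of at least 15% over single-modality models suggests the gaze and typing streams carry complementary signal, whatever fusion scheme the authors actually used.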

Citation (APA)

Wang, J., Ngai, G., & Leong, H. V. (2020). Hand-eye Coordination for Textual Difficulty Detection in Text Summarization. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 269–277). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3418831
