Context-Based Personalized Predictors of the Length of Written Responses to Open-Ended Questions of Elementary School Students

Abstract

One of the main goals of elementary school STEM teachers is that their students write their own explanations. However, analyzing answers to questions that promote writing is difficult and time-consuming, so a system that supports teachers in this task is desirable. For elementary school students, the length of the text is a basic component of several metrics of the complexity of their answers. In this paper we develop a set of predictors of the length of written responses to open-ended questions. To do so, we use the history of hundreds of elementary school students who answered open-ended questions posed by teachers on an online STEM platform. We analyze four different context-based personalized predictors. For each student, the predictors consider the historical impact on that student's answers of a limited number of keywords present in the question. We collected data over a whole year, using the first semester's data to train our predictors and the second semester's data to evaluate them. We found that with a history of as few as 20 questions, a context-based personalized predictor beats a baseline predictor.
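The abstract does not spell out the four predictors, but it does describe their shared idea: condition a per-student length estimate on keywords appearing in the question, and fall back to a simpler per-student baseline otherwise. The Python sketch below is one plausible keyword-averaging variant under stated assumptions; the class name, the word-count length measure, and the mean-of-keyword-averages rule are illustrative choices, not the authors' exact method.

```python
from collections import defaultdict
from statistics import mean

class KeywordLengthPredictor:
    """One plausible context-based personalized predictor (illustrative,
    not the paper's exact formulation): for a given student, average the
    historical answer lengths observed for each keyword in the question."""

    def __init__(self):
        # keyword -> list of answer lengths (in words) seen for this student
        self.lengths_by_keyword = defaultdict(list)
        self.all_lengths = []  # the student's full length history (fallback)

    def train(self, history):
        """history: iterable of (question_keywords, answer_text) pairs
        from the training period (e.g. the first semester)."""
        for keywords, answer in history:
            n_words = len(answer.split())
            self.all_lengths.append(n_words)
            for kw in keywords:
                self.lengths_by_keyword[kw].append(n_words)

    def predict(self, keywords):
        """Predict the length (in words) of the answer to a new question."""
        known = [mean(self.lengths_by_keyword[kw])
                 for kw in keywords if kw in self.lengths_by_keyword]
        if known:
            return mean(known)  # context-based personalized estimate
        if self.all_lengths:
            return mean(self.all_lengths)  # baseline: student's mean length
        return 0.0

# Example: train on one student's first-semester history, then predict
# the expected length of a second-semester answer from question keywords.
student = KeywordLengthPredictor()
student.train([
    ({"why", "plants"}, "Plants need sunlight because they make food with it"),
    ({"explain", "water"}, "Water evaporates when the sun heats it up"),
])
print(student.predict({"why", "water"}))  # blends keyword-specific averages
```

Under this reading, the baseline predictor the paper compares against would correspond to the keyword-free fallback (the student's mean historical length), which the context-based estimate reportedly beats once roughly 20 questions of history are available.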

Cite

APA

Araya, R., Jiménez, A., & Aguirre, C. (2018). Context-Based Personalized Predictors of the Length of Written Responses to Open-Ended Questions of Elementary School Students. In Studies in Computational Intelligence (Vol. 769, pp. 135–146). Springer Verlag. https://doi.org/10.1007/978-3-319-76081-0_12
