Explaining Errors in Predictions of At-Risk Students in Distance Learning Education

3 Citations
33 Readers (Mendeley users who have this article in their library)

This article is free to access.

Abstract

Despite recognising the importance of transparency and understanding of predictive models, little effort has been made to investigate the errors these models make. In this paper, we address this gap by interviewing 12 students whose actual assignment submission differed from the model's prediction. Following our previous quantitative analysis of 25,000+ students, we conducted online interviews with two groups of students: those predicted to submit their assignment who did not (False Negative) and those predicted not to submit who did (False Positive). Interviews revealed that, for False Negatives, the non-submission of assignments was explained by personal, financial and practical reasons. Overall, the factors explaining the different outcomes were not related to any of the student data currently captured by the predictive model.
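To make the two error groups concrete, the sketch below shows one way to separate students into the False Negative and False Positive groups described above, treating "not submitting" as the positive (at-risk) class. This is only an illustrative assumption on our part; the column names `predicted_submit` and `actual_submit` are hypothetical and do not reflect the paper's actual data schema or model.

```python
# Minimal sketch (assumption): splitting students into the two mismatch groups
# named in the abstract. Column names are hypothetical, not from the paper.
import pandas as pd

def split_prediction_errors(df: pd.DataFrame) -> dict:
    """Return the two error groups, with 'not submitting' as the positive (at-risk) class."""
    # False Negative: model predicted the student would submit, but they did not.
    false_negatives = df[df["predicted_submit"] & ~df["actual_submit"]]
    # False Positive: model predicted the student would not submit, but they did.
    false_positives = df[~df["predicted_submit"] & df["actual_submit"]]
    return {"false_negative": false_negatives, "false_positive": false_positives}

# Toy example
students = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "predicted_submit": [True, True, False, False],
    "actual_submit": [True, False, True, False],
})
groups = split_prediction_errors(students)
print(groups["false_negative"]["student_id"].tolist())  # [2]
print(groups["false_positive"]["student_id"].tolist())  # [3]
```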

Citation (APA)

Hlosta, M., Papathoma, T., & Herodotou, C. (2020). Explaining Errors in Predictions of At-Risk Students in Distance Learning Education. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12164 LNAI, pp. 119–123). Springer. https://doi.org/10.1007/978-3-030-52240-7_22
