Trustworthy Academic Risk Prediction with Explainable Boosting Machines


Abstract

The use of predictive models in education promises individual support and personalization for students. To develop trustworthy models, we need to understand which factors and causes contribute to a prediction. It is therefore necessary to develop models that are not only accurate but also explainable. Moreover, we need to conduct holistic model evaluations that quantify explainability and other metrics alongside established performance metrics. This paper explores the use of Explainable Boosting Machines (EBMs) for the task of academic risk prediction. EBMs are an extension of Generalized Additive Models and promise state-of-the-art performance on tabular datasets while being inherently interpretable. We demonstrate the benefits of using EBMs for academic risk prediction trained on online learning behavior data and show the explainability of the model. Our study shows that EBMs are as accurate as other state-of-the-art approaches while being competitive on metrics relevant for trustworthy academic risk prediction, such as earliness, stability, fairness, and faithfulness of explanations. The results encourage the broader use of EBMs for other Artificial Intelligence in Education tasks.
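The interpretability claimed for EBMs comes from their additive structure: the model's output is an intercept plus a sum of learned per-feature shape functions, so each feature's contribution to a prediction can be read off directly. The following is an illustrative sketch of that structure only, not the InterpretML implementation used in the paper; the feature names, bin edges, and scores are made-up values for demonstration.

```python
import math

# Illustrative sketch of an EBM-style additive model (not the InterpretML
# implementation): each feature has a learned "shape function", here a simple
# binned lookup of additive scores on the logit scale. All numbers are
# hypothetical placeholders.
shape_functions = {
    # feature -> (bin_edges, per-bin additive score on the logit scale)
    "num_logins": ([5, 20],  [-0.8, 0.1, 0.6]),
    "quiz_score": ([40, 70], [-1.2, 0.0, 0.9]),
}
intercept = -0.3

def feature_score(feature, value):
    edges, scores = shape_functions[feature]
    bin_idx = sum(value >= e for e in edges)  # locate the bin for this value
    return scores[bin_idx]

def predict_proba(example):
    # Additive model: logit = intercept + sum of per-feature contributions.
    logit = intercept + sum(feature_score(f, v) for f, v in example.items())
    return 1 / (1 + math.exp(-logit))

def explain(example):
    # The explanation is exact: each feature's contribution to the logit
    # is precisely its shape-function value, with no post-hoc approximation.
    return {f: feature_score(f, v) for f, v in example.items()}

student = {"num_logins": 3, "quiz_score": 55}
risk = predict_proba(student)        # risk-style probability
contributions = explain(student)     # per-feature logit contributions
```

In the real InterpretML library, the analogous calls would be `ExplainableBoostingClassifier.fit` followed by `explain_local` / `explain_global`; the point of the sketch is only that the additive form makes the per-feature attribution exact rather than approximated.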

Citation (APA)

Dsilva, V., Schleiss, J., & Stober, S. (2023). Trustworthy Academic Risk Prediction with Explainable Boosting Machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13916 LNAI, pp. 463–475). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-36272-9_38
