Towards Interpretable Deep Learning Models for Knowledge Tracing


Abstract

Driven by rapid advances in deep learning, deep neural networks have recently been adopted to design knowledge tracing (KT) models in pursuit of better prediction performance. However, the lack of interpretability of these models has seriously impeded their practical application, since their opaque decision processes and complex inner structures make the outputs hard to explain. We therefore propose adopting a post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models. Specifically, we apply the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating relevance from the model's output layer to its input layer. The experimental results show the feasibility of using LRP to interpret the DLKT model's predictions and partially validate the computed relevance scores. We believe this work is a solid step towards fully interpreting DLKT models and promoting their practical application.
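
As context for the LRP procedure mentioned above, the following is a minimal, hypothetical sketch of the LRP epsilon rule applied to a single linear layer; the function name, the stabilizer value, and the NumPy formulation are illustrative assumptions, not the authors' implementation. In an RNN-based DLKT model, relevance is redistributed in this fashion through the linear connections while being propagated backwards from the prediction to the input interactions (LRP variants for LSTMs typically route all relevance of a multiplicative gate to its signal path).

import numpy as np

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a linear layer z = a @ W + b
    back onto its inputs a, using the LRP epsilon rule.

    a      : (d_in,)        input activations of the layer
    W      : (d_in, d_out)  weight matrix
    b      : (d_out,)       bias
    R_out  : (d_out,)       relevance assigned to the layer's outputs
    returns: (d_in,)        relevance assigned to the layer's inputs
    """
    z = a @ W + b                                    # forward pre-activations
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by ~0
    s = R_out / z_stab                               # relevance per unit of pre-activation
    return a * (W @ s)                               # each input receives its weighted share

# Hypothetical usage: start with the relevance of the output unit for the
# predicted response, then apply rules like this one layer by layer until
# relevance scores are obtained for the input exercise/interaction features.
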

Citation (APA)

Lu, Y., Wang, D., Meng, Q., & Chen, P. (2020). Towards Interpretable Deep Learning Models for Knowledge Tracing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12164 LNAI, pp. 185–190). Springer. https://doi.org/10.1007/978-3-030-52240-7_34
