ELAKT: Enhancing Locality for Attentive Knowledge Tracing


Abstract

Knowledge tracing models based on deep learning can achieve impressive predictive performance by leveraging attention mechanisms. However, two challenges remain in attentive knowledge tracing (AKT). First, classical AKT models assign relatively low attention when processing exercise sequences whose knowledge concepts (KCs) shift, making it difficult to capture comprehensive knowledge states across sequences. Second, classical models do not account for students' stochastic behaviors, which hinders AKT models from capturing anomalous knowledge states. This article proposes an AKT model, called Enhancing Locality for Attentive Knowledge Tracing (ELAKT), which is a variant of the deep KT model. The proposed model leverages the transformer encoder module to aggregate the knowledge embeddings generated by both exercises and responses over all timesteps. In addition, it uses causal convolutions to aggregate and smooth local knowledge states. ELAKT then uses the comprehensive KC states to drive a prediction correction module that forecasts students' future responses, addressing the noise caused by stochastic behaviors. Experimental results demonstrate that ELAKT consistently outperforms state-of-the-art baseline KT models.
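The abstract describes an architecture that attends over the full interaction sequence and then smooths local knowledge states with causal convolutions before a prediction (correction) step. The following is a minimal, illustrative sketch of that combination in PyTorch, assuming a standard transformer encoder, a left-padded 1-D convolution, and a plain linear head standing in for the prediction correction module; all module names, dimensions, and the interaction encoding are assumptions for illustration, not the authors' actual ELAKT implementation.

```python
import torch
import torch.nn as nn


class CausalConvSmoother(nn.Module):
    """Smooths local knowledge states with a left-padded (causal) 1-D convolution."""

    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.pad = kernel_size - 1                     # pad the past only
        self.conv = nn.Conv1d(d_model, d_model, kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); Conv1d expects (batch, channels, seq_len)
        x = x.transpose(1, 2)
        x = nn.functional.pad(x, (self.pad, 0))        # no access to future steps
        return self.conv(x).transpose(1, 2)


class ELAKTSketch(nn.Module):
    """Illustrative stand-in: transformer encoder + causal smoothing + prediction head."""

    def __init__(self, n_exercises: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Hypothetical encoding of (exercise, response) pairs as a single index.
        self.interaction_emb = nn.Embedding(2 * n_exercises, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.smoother = CausalConvSmoother(d_model)
        self.predict = nn.Linear(d_model, 1)           # stands in for the correction module

    def forward(self, exercises: torch.Tensor, responses: torch.Tensor) -> torch.Tensor:
        # exercises, responses: (batch, seq_len) integer tensors (responses in {0, 1})
        seq_len = exercises.size(1)
        causal_mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=exercises.device),
            diagonal=1,
        )
        h = self.interaction_emb(2 * exercises + responses)
        h = self.encoder(h, mask=causal_mask)          # attend over all past timesteps
        h = self.smoother(h)                           # aggregate/smooth local states
        return torch.sigmoid(self.predict(h)).squeeze(-1)
```

In this sketch, a forward pass takes integer exercise IDs and binary responses of shape (batch, seq_len) and returns per-timestep probabilities of answering correctly; the causal mask and left-only padding ensure no future information leaks into the prediction.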

Cite

APA

Pu, Y., Liu, F., Shi, R., Yuan, H., Chen, R., Peng, T., & Wu, W. (2024). ELAKT: Enhancing Locality for Attentive Knowledge Tracing. ACM Transactions on Information Systems, 42(4). https://doi.org/10.1145/3652601
