Prediction of In-Class Performance Based on MFO-ATTENTION-LSTM

Abstract

In this paper, we present a novel approach to predicting in-class performance from course learning log data, a task of importance for personalized education and classroom management. Specifically, a set of fine-grained features is extracted from unit learning log data to train a prediction model based on long short-term memory (LSTM). To further improve accuracy, we introduce moth flame optimization-attention-LSTM (MFO-Attention-LSTM), an improvement on the conventional LSTM-Attention model: the MFO algorithm replaces conventional backpropagation for computing the attention layer parameters, allowing the model to escape local optima. The proposed model outperforms the SVM, CNN, RNN, LSTM, and LSTM-Attention models in terms of F1 score, and empirical results show that the MFO optimization contributes significantly to this improvement. The proposed MFO-Attention-LSTM model thus offers a promising solution for predicting in-class performance from course learning logs and can provide valuable insights for personalized education and classroom management.
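The abstract does not give the paper's exact formulation, but its central idea, searching the attention-layer parameters with MFO instead of fitting them by backpropagation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a binary classification head, synthetic stand-ins for the LSTM hidden states, and the standard MFO update from Mirjalili (2015); all names, shapes, and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for LSTM hidden states: (n_samples, seq_len, hidden_dim).
# In the paper these would come from an LSTM over unit learning logs;
# synthetic data keeps the sketch self-contained.
n, T_steps, H = 200, 10, 8
X = rng.normal(size=(n, T_steps, H))
# Synthetic binary labels loosely tied to the inputs (illustrative only).
y = (X[:, -1, :3].sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fitness(theta):
    """Cross-entropy loss of an attention + linear head parameterized by theta."""
    w_att = theta[:H]                          # attention scoring vector
    w_out, b_out = theta[H:2 * H], theta[-1]   # linear classification head
    scores = X @ w_att                         # (n, T_steps) attention scores
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # softmax over time steps
    ctx = (alpha[..., None] * X).sum(axis=1)   # attention-weighted context
    p = sigmoid(ctx @ w_out + b_out)
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# --- Moth flame optimization (Mirjalili, 2015) over theta ---------------
dim, n_moths, max_iter, b = 2 * H + 1, 30, 100, 1.0
moths = rng.uniform(-1, 1, size=(n_moths, dim))

for it in range(max_iter):
    fit = np.array([fitness(m) for m in moths])
    if it == 0:
        order = np.argsort(fit)
        flames, flame_fit = moths[order].copy(), fit[order].copy()
    else:
        # Flames are the best solutions found so far, kept sorted.
        merged = np.vstack([flames, moths])
        merged_fit = np.concatenate([flame_fit, fit])
        keep = np.argsort(merged_fit)[:n_moths]
        flames, flame_fit = merged[keep], merged_fit[keep]
    # The flame count shrinks over iterations (exploration -> exploitation).
    n_flames = round(n_moths - it * (n_moths - 1) / max_iter)
    a = -1.0 - it / max_iter                   # decreases linearly from -1 to -2
    for i in range(n_moths):
        f = flames[min(i, n_flames - 1)]       # surplus moths share the last flame
        d = np.abs(f - moths[i])
        t = (a - 1) * rng.uniform(size=dim) + 1  # t in [a, 1]
        # Logarithmic spiral flight toward the assigned flame.
        moths[i] = d * np.exp(b * t) * np.cos(2 * np.pi * t) + f

print(f"best attention-head loss found by MFO: {flame_fit[0]:.4f}")
```

Because MFO only needs fitness evaluations, the attention parameters can be searched without gradients, which is what lets the population-based spiral updates move the model out of local optima that gradient descent would settle into.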

Citation (APA)

Qin, X., Wang, C., Yuan, Y. S., & Qi, R. (2024). Prediction of In-Class Performance Based on MFO-ATTENTION-LSTM. International Journal of Computational Intelligence Systems, 17(1). https://doi.org/10.1007/s44196-023-00395-3
