Automated Assessment of Student Self-explanation During Source Code Comprehension


Abstract

This paper presents a novel method to automatically assess self-explanations generated by students during code comprehension activities. The self-explanations are produced in the context of an online learning environment that asks students to freely explain Java code examples line-by-line. We explored a number of models consisting of textual features in conjunction with machine learning algorithms such as Support Vector Regression (SVR), Decision Trees (DT), and Random Forests (RF). SVR performed best, achieving a correlation of 0.7088 with human judgments. The best model combined features such as semantic measures obtained from a pre-trained Sentence-BERT model and from previously developed semantic algorithms used in a state-of-the-art intelligent tutoring system.
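To make the general approach concrete, the sketch below illustrates one plausible reading of the pipeline described in the abstract: compute a semantic-similarity feature for each student self-explanation with a pre-trained Sentence-BERT model, fit an SVR against human scores, and report a correlation with human judgments. This is not the authors' implementation; the toy data, the reference-explanation setup, and the single-feature design are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): Sentence-BERT similarity
# feature + Support Vector Regression, evaluated by correlation with humans.
import numpy as np
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util
from sklearn.svm import SVR

# Hypothetical examples: a student's free-form explanation of a Java line,
# an expert reference explanation, and a human-assigned quality score.
students = [
    "the loop adds each number in the array to sum",
    "it prints something",
    "checks if i is even before adding it",
    "declares an integer variable named count set to zero",
]
references = [
    "the for loop accumulates every array element into the variable sum",
    "the statement prints the final total to the console",
    "the if statement tests whether i is divisible by two",
    "an int variable count is declared and initialized to 0",
]
human_scores = np.array([0.9, 0.3, 0.8, 0.95])

# Encode explanations with a pre-trained Sentence-BERT model.
model = SentenceTransformer("all-MiniLM-L6-v2")
emb_s = model.encode(students, convert_to_numpy=True)
emb_r = model.encode(references, convert_to_numpy=True)

# Single feature: cosine similarity between student and reference explanations.
cos = np.array([util.cos_sim(a, b).item() for a, b in zip(emb_s, emb_r)])
X = cos.reshape(-1, 1)

# Fit SVR on the similarity feature and correlate predictions with human scores.
svr = SVR(kernel="rbf").fit(X, human_scores)
preds = svr.predict(X)
r, _ = pearsonr(preds, human_scores)
print(f"Pearson r with human scores: {r:.4f}")
```

In the paper, this single similarity feature would be one of several textual and semantic features fed to the regressor; the sketch only shows the overall feature-extraction-plus-SVR structure.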

Cite (APA)
Chapagain, J., Tamang, L., Banjade, R., Oli, P., & Rus, V. (2022). Automated Assessment of Student Self-explanation During Source Code Comprehension. In Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS (Vol. 35). Florida Online Journals, University of Florida. https://doi.org/10.32473/flairs.v35i.130540
