Validity of a graph-based automatic assessment system for programming assignments: Human versus automatic grading


Abstract

Programming is a complex and challenging subject to teach and learn. One strategy with proven results is intensive, continual practice. However, this strategy imposes an extra workload on teachers, who must evaluate large numbers of programming assignments fairly and promptly. Furthermore, under the distance-teaching circumstances imposed by the coronavirus (COVID-19) pandemic, regular assessment is a fundamental feedback mechanism: it keeps students engaged in learning and determines the extent to which they have reached the expected learning goals in this new learning reality. Automating the assessment process would therefore be particularly appreciated by instructors and highly beneficial to students. The purpose of this paper is to investigate the feasibility of automatic assessment in the context of computer programming courses. To that end, a prototype that merges static and dynamic analysis was developed. An empirical evaluation of the proposed grading tool in an introductory C-language course is presented, and its grades are compared to manually assigned marks. The outcomes of the comparative analysis show the reliability of the proposed automatic assessment prototype.
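The abstract describes a prototype that merges static and dynamic analysis to grade submissions. As a purely illustrative sketch (not the authors' implementation), the dynamic-analysis half of such a grader can be pictured as running the submission against a set of test cases and awarding marks in proportion to the cases passed; all names, weights, and the scoring rule below are assumptions for illustration:

```python
# Minimal sketch of the dynamic-analysis half of an automatic grader:
# run the submission against test cases and score the fraction that
# produce the expected output. Names and the scoring rule are
# illustrative, not taken from the paper's prototype.

def grade_dynamic(run_submission, test_cases, max_mark=20):
    """run_submission: callable taking the test input and returning output.
    test_cases: list of (input, expected_output) pairs."""
    passed = 0
    for given_input, expected in test_cases:
        try:
            if run_submission(given_input) == expected:
                passed += 1
        except Exception:
            pass  # a crash on a test case counts as a failed case
    return round(max_mark * passed / len(test_cases), 2)

# Example: a stand-in submission that should return n squared.
submission = lambda n: n * n
tests = [(2, 4), (3, 9), (5, 25), (10, 101)]  # last case intentionally fails
print(grade_dynamic(submission, tests))  # 15.0
```

In the paper's setting the submission would be a compiled C program executed against instructor-supplied inputs, and this dynamic score would be combined with a static, graph-based comparison of the program's structure against reference solutions.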

APA

Zougari, S., Tanana, M., & Lyhyaoui, A. (2022). Validity of a graph-based automatic assessment system for programming assignments: Human versus automatic grading. International Journal of Electrical and Computer Engineering, 12(3), 2867–2875. https://doi.org/10.11591/ijece.v12i3.pp2867-2875
