Automatic grading systems help lessen the load of manual grading. Most existing autograders are based on unit testing, which focuses on the correctness of code but offers limited scope for judging code quality. Moreover, unit tests are cumbersome to implement for code that produces graphical output. We propose an autograder that can effectively judge the code quality of the visual-output programs created by students enrolled in a high school-level computational thinking course. We aim to provide teachers with suggestions on an essential aspect of their grading, namely the level of student competency in using abstraction in their code. A dataset from five different assignments, including open-ended problems, is used to evaluate the effectiveness of our autograder. Our initial experiments show that our method can classify students' submissions even for open-ended problems, where existing autograders fail to do so. Additionally, survey responses from course teachers support the importance of our work.
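The abstract does not reproduce the authors' grading method, but as a rough illustration of how abstraction use might be estimated statically (rather than via unit tests), the sketch below parses a Python submission into an AST and counts features such as user-defined functions and loops. Everything here is an assumption for illustration: the feature set, the thresholds, the labels, and the turtle-graphics example submission are all hypothetical and are not taken from the paper.

```python
# Hypothetical sketch only; not the paper's actual autograder.
# It estimates "abstraction use" from crude AST feature counts.
import ast


def abstraction_features(source: str) -> dict:
    """Count AST features that loosely signal use of abstraction."""
    tree = ast.parse(source)
    features = {"functions": 0, "parameters": 0, "loops": 0}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            features["functions"] += 1
            features["parameters"] += len(node.args.args)
        elif isinstance(node, (ast.For, ast.While)):
            features["loops"] += 1
    return features


def crude_abstraction_label(features: dict) -> str:
    """Map feature counts to a coarse label (illustrative thresholds)."""
    score = 2 * features["functions"] + features["loops"]
    if score >= 4:
        return "high abstraction"
    if score >= 1:
        return "some abstraction"
    return "little abstraction"


if __name__ == "__main__":
    # Hypothetical student submission using turtle-style graphics calls.
    submission = (
        "def square(t, size):\n"
        "    for _ in range(4):\n"
        "        t.forward(size)\n"
        "        t.left(90)\n"
    )
    feats = abstraction_features(submission)
    print(feats, "->", crude_abstraction_label(feats))
```

Because a static analysis like this never executes the submission, it sidesteps the difficulty of unit-testing graphical output and can be applied to open-ended problems where no single reference output exists.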
CITATION STYLE
Tisha, S. M., Oregon, R. A., Baumgartner, G., Alegre, F., & Moreno, J. (2022). An Automatic Grading System for a High School-level Computational Thinking Course. In Proceedings - 4th International Workshop on Software Engineering Education for the Next Generation, SEENG 2022 (pp. 20–27). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3528231.3528357