Speeding Up Automated Assessment of Programming Exercises

Citations: 1
Mendeley readers: 14

Abstract

Introductory programming courses around the world use automatic assessment. Automatic assessment of programming code is typically performed via unit tests, which require computation time to execute, at times in significant amounts, leading to computation costs and delayed feedback to students. To address this issue, we present a step-based approach for speeding up automated assessment, consisting of (1) a cache of past programming exercise submissions and their associated test results, used to avoid retesting equivalent new submissions; (2) static analysis that heuristically detects problems such as infinite loops; (3) a machine learning model that evaluates programs without running them; and (4) a traditional set of unit tests. When a student submits code for an exercise, the code is evaluated sequentially through each step, providing feedback to the student at the earliest possible point and reducing the need to run the tests. We evaluate the impact of the proposed approach using data collected from an introductory programming course and demonstrate a considerable reduction in the number of exercise submissions that require running the tests (up to 80% for some exercises). The approach leads to faster feedback in a more sustainable way, and steps (2) and (3) also provide opportunities for precise feedback that is not exercise specific.
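
To make the step ordering concrete, the following is a minimal Python sketch of the four-step flow described above. Only the ordering of the steps comes from the abstract; the cache key (a hash of comment- and whitespace-normalized source), the infinite-loop heuristic, and the classifier and test-runner stubs (normalize, looks_nonterminating, classify, run_tests) are hypothetical placeholders, not the authors' implementation.

import hashlib
import re

# Step 1 state: hash of normalized source -> cached verdict.
result_cache: dict[str, str] = {}

def normalize(source: str) -> str:
    """One possible notion of equivalence for the cache: drop comments
    and collapse whitespace so trivially equivalent submissions collide."""
    kept = []
    for line in source.splitlines():
        line = " ".join(line.split("#", 1)[0].split())
        if line:
            kept.append(line)
    return "\n".join(kept)

def looks_nonterminating(source: str) -> bool:
    """Crude stand-in for the static-analysis step: flag a `while True:`
    loop when no break or return appears anywhere in the submission."""
    return bool(re.search(r"while\s+True\s*:", source)) and not re.search(
        r"\b(break|return)\b", source
    )

def classify(source: str) -> str | None:
    """Stand-in for the ML step: return a verdict only when the model is
    confident, otherwise None so the pipeline falls through to the tests."""
    return None  # no model in this sketch; always defer to step 4

def run_tests(source: str) -> str:
    """Stand-in for executing the exercise's unit test suite."""
    return "pass"

def assess(source: str) -> str:
    key = hashlib.sha256(normalize(source).encode()).hexdigest()
    if key in result_cache:               # step 1: cache of past results
        return result_cache[key]
    if looks_nonterminating(source):      # step 2: static analysis
        verdict = "fail: possible infinite loop"
    else:
        verdict = classify(source)        # step 3: ML model
        if verdict is None:
            verdict = run_tests(source)   # step 4: traditional unit tests
    result_cache[key] = verdict
    return verdict

In this sketch, assess(student_code) returns at the earliest step that yields a verdict and memoizes the result, so a resubmission that is identical after normalization skips steps 2 through 4 entirely.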

Cite

APA

Sarsa, S., Leinonen, J., Koutcheme, C., & Hellas, A. (2022). Speeding Up Automated Assessment of Programming Exercises. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3555009.3555013
