In this paper, we perform a comparative analysis, using a within-subjects 'think-aloud' protocol, of introductory programming students solving tracing problems in both paper-based and computer-based formats. We demonstrate that, on computer-based exams with compiler/interpreter access, students can achieve significantly higher scores on tracing problems than they do on similar paper-based questions, through brute-force execution of the provided code. Furthermore, we characterize how students use machine execution as they solve computer-based tracing problems. We then suggest "reverse-tracing" questions, where a block of code is provided and students must identify an input that will produce a specified output, as a potential alternative means of assessing the same skill as tracing questions on such computer-based exams. Our initial investigation suggests that correctly designed reverse-tracing problems on computer-based exams more closely track a student's performance on similar questions in a paper-based format. In addition, we find that students' thought processes while solving tracing and reverse-tracing problems are similar, but not identical.
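For illustration, a reverse-tracing item might look like the following minimal sketch. This example, including the function name `mystery` and the choice of Python as the CS 1 language, is hypothetical and is not drawn from the exam instruments used in the study.

```python
# Hypothetical reverse-tracing question: the code below is given, along with a
# target return value. The student must supply an input that produces it.

def mystery(n):
    """Count how many times n can be halved (integer division) before reaching 1."""
    count = 0
    while n > 1:
        n = n // 2
        count += 1
    return count

# Question: give a value of n for which mystery(n) returns 3.
# Any n from 8 through 15 is a correct answer, for example:
assert mystery(8) == 3
assert mystery(15) == 3
```

Note that running the code alone does not reveal an answer: even with interpreter access, the student must reason backwards from the target output to a suitable input, which is why such items may resist the brute-force strategy described above.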