Can automated adaptive feedback for correcting erroneous programs help novice programmers learn to code better? In a large-scale experiment, we compare the performance of students tutored by humans with that of students receiving automated adaptive feedback. The automated feedback was designed using one of two well-known instructional principles: (i) presenting the correct solution for the immediate problem, or (ii) presenting generated examples or analogies that guide the student towards the correct solution. We report empirical results from a large-scale (N = 480, 10,000+ person-hour) experiment assessing the efficacy of these automated compilation-error feedback tools. Using survival analysis of error rates measured over seven weeks, we found that automated feedback allows students to resolve errors in their code more efficiently than manual feedback does. However, we also found that this advantage is primarily logistical rather than conceptual: the performance benefit seen during lab assignments disappeared during exams, in which feedback of any kind was withdrawn. We further found that the performance advantage of automated feedback over human tutors increases with problem complexity, and that example-based feedback and specific-repair feedback have distinct, non-overlapping advantages for different categories of programming errors. Our results offer a clear and granular delimitation of the pedagogical benefits of automated feedback in teaching programming to novices.
Citation:
Ahmed, U. Z., Srivastava, N., Sindhgatta, R., & Karkare, A. (2020). Characterizing the pedagogical benefits of adaptive feedback for compilation errors by novice programmers. In Proceedings - International Conference on Software Engineering (pp. 139–150). IEEE Computer Society. https://doi.org/10.1145/3377814.3381703