The computer science community has struggled to assess student learning via Scratch programming at the primary school level (ages 7-12). Prior work has relied most heavily on artifact analysis (of student code and projects), with some attempts at one-on-one interviews and written assessments. In this paper, we explore the relationship between artifact analysis and written assessments. Through this study of a large-scale introductory computing implementation, we found that for students who had code in their projects, performance on specific questions on the written assessments was only very weakly correlated with specific attributes of final projects typically used in artifact analysis, as well as with attributes we use to define candidate code (r < 0.2, p < 0.05). In particular, the correlation is not nearly strong enough for artifact analysis to serve as a proxy for understanding.
Salac, J., & Franklin, D. (2020). If They Build It, Will They Understand It? Exploring the Relationship between Student Code and Performance. In Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE (pp. 473–479). Association for Computing Machinery. https://doi.org/10.1145/3341525.3387379