Abstract
As computing education makes its way into schools, there is still little research on how to assess the learning of algorithms and programming concepts as a central topic. Furthermore, to ensure valid instructional feedback, the reliability and construct validity of an assessment model are important concerns. This work therefore presents a large-scale evaluation of the CodeMaster rubric for the performance-based assessment of algorithms and programming concepts, analyzing software artifacts created by students as part of complex, open-ended learning activities. The assessment is automated through a web-based tool that performs a static analysis of the source code of App Inventor projects. Based on 88,812 projects from the App Inventor Gallery, we statistically analyzed the reliability and construct validity of the rubric. The results indicate that the rubric can be regarded as reliable (Cronbach's alpha α = 0.84). With respect to construct validity, the results of a correlation and factor analysis also indicate convergent validity. This suggests that the rubric can be used for a valid assessment of algorithms and programming concepts in App Inventor programs as part of a comprehensive assessment complemented by other assessment methods. The results can guide the improvement of assessment models and support decisions about applying the rubric to support computing education in K-12.
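The reliability figure reported above is Cronbach's alpha computed over the rubric's item scores. As an illustration only (not the paper's actual implementation), the following is a minimal Python sketch of the standard alpha formula, assuming an (n_projects × n_items) matrix of per-item rubric scores; the example data are hypothetical.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_projects x n_items) matrix of rubric scores."""
    k = scores.shape[1]                          # number of rubric items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 projects scored on 4 rubric items (0-3 scale)
scores = np.array([
    [3, 2, 3, 2],
    [1, 1, 0, 1],
    [2, 2, 2, 3],
    [0, 1, 1, 0],
    [3, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")

Values of alpha above roughly 0.8, such as the 0.84 reported in the paper, are conventionally read as good internal consistency among the rubric items.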
Citation
Alves, N. D. C., Wangenheim, C. G. V., Hauck, J. C. R., & Borgatto, A. F. (2020). A large-scale evaluation of a rubric for the automatic assessment of algorithms and programming concepts. In SIGCSE 2020 - Proceedings of the 51st ACM Technical Symposium on Computer Science Education (pp. 556–562). https://doi.org/10.1145/3328778.3366840