Who tests the testers?: Avoiding the perils of automated testing

Abstract

Instructors routinely use automated assessment methods to evaluate the semantic qualities of student implementations and, sometimes, test suites. In this work, we distill a variety of automated assessment methods in the literature down to a pair of assessment models. We identify pathological assessment outcomes in each model that point to underlying methodological flaws. These theoretical flaws broadly threaten the validity of the techniques, and we observe them in practice across multiple assignments of an introductory programming course. We propose adjustments that remedy these flaws and demonstrate, on the same assignments, that our interventions improve the accuracy of assessment. We believe these adjustments will let instructors substantially improve the accuracy of automated assessment.
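
For context, automated assessment of this kind is typically implemented by running one artifact against another: an instructor's test suite grades a student implementation, and a student's test suite is graded by how well it separates known-correct instructor implementations from deliberately buggy ones. The sketch below is a minimal illustration of that second model under those common assumptions; it is not the paper's exact method, and all names and interfaces (Test, accepts, assess_suite, the wheat/chaff lists) are hypothetical.

```python
# Minimal sketch of one common model for assessing a student test suite:
# run it against instructor-written implementations, some known-correct
# ("wheats") and some deliberately buggy ("chaffs"). All names and
# interfaces here are illustrative assumptions, not the paper's artifacts.

from typing import Callable, Iterable

# A test takes an implementation (here, a function) and reports pass/fail.
Impl = Callable[[int], int]
Test = Callable[[Impl], bool]

def accepts(tests: Iterable[Test], impl: Impl) -> bool:
    """A suite accepts an implementation iff every test passes on it."""
    return all(t(impl) for t in tests)

def assess_suite(tests, wheats, chaffs):
    """Score a suite by validity (accepts every correct implementation)
    and thoroughness (rejects as many buggy implementations as possible)."""
    valid = all(accepts(tests, w) for w in wheats)
    caught = sum(1 for c in chaffs if not accepts(tests, c))
    return {"valid": valid, "chaffs_caught": caught, "chaffs_total": len(chaffs)}

# Illustrative use: assessing test suites for an absolute-value function.
wheats = [abs]                  # known-correct implementation(s)
chaffs = [lambda x: x,          # buggy: wrong on negative inputs
          lambda x: -x]         # buggy: wrong on positive inputs

suite = [lambda f: f(-3) == 3,  # student-written example tests
         lambda f: f(4) == 4]

print(assess_suite(suite, wheats, chaffs))
# {'valid': True, 'chaffs_caught': 2, 'chaffs_total': 2}
```

Splitting the score into validity and thoroughness keeps the two failure directions separate: a suite that rejects a correct implementation is wrong, while one that misses buggy implementations is merely incomplete.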

Citation (APA)

Wrenn, J., Krishnamurthi, S., & Fisler, K. (2018). Who tests the testers?: Avoiding the perils of automated testing. In ICER 2018 - Proceedings of the 2018 ACM Conference on International Computing Education Research (pp. 51–59). Association for Computing Machinery, Inc. https://doi.org/10.1145/3230977.3230999
