In this paper we focus on the task of rating solutions to a programming exercise. State-of-the-art rating methods generally examine each solution against a set of test cases, typically designed manually; the completeness of such a suite is therefore hard to guarantee. We propose applying bounded model checking to generate test cases automatically. Our experimental evaluation reveals a substantial increase in rating accuracy at the cost of a moderate increase in the computational resources required. Most importantly, model checking uncovers errors in solutions that would previously have been classified as correct.
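The idea can be pictured with a small, hypothetical example (a sketch of the general technique, not the authors' actual tool chain). Suppose the exercise is to compute 1 + 2 + ... + n. A bounded model checker such as CBMC can be pointed at a harness that asserts the equivalence of a submitted solution and a reference solution over a bounded input domain; the names reference, candidate, and the input bound below are illustrative assumptions.

/* Sketch: differential test-case generation with a bounded model
 * checker (CBMC-style). reference() plays the assumed-correct
 * solution, candidate() a submission with a deliberate bug. */
#include <assert.h>

int nondet_int(void);  /* left undefined: CBMC treats the result as a free input */

/* Reference solution: closed-form sum 1 + 2 + ... + n. */
int reference(int n) { return n * (n + 1) / 2; }

/* Submitted solution: loop with an off-by-one error. */
int candidate(int n) {
    int s = 0;
    for (int i = 1; i < n; i++)  /* bug: should be i <= n */
        s += i;
    return s;
}

int main(void) {
    int n = nondet_int();
    __CPROVER_assume(0 <= n && n <= 100);  /* bound the input domain */

    /* If this assertion can fail within the loop-unwinding bound,
     * the checker reports a concrete value of n; that value becomes
     * a new test case exposing the discrepancy. */
    assert(candidate(n) == reference(n));
    return 0;
}

Run with, for example, cbmc harness.c --unwind 101 --trace, the checker prints a counterexample trace (here, any n >= 1); adding each such input to the rating suite and re-checking as solutions evolve corresponds to the incremental generation named in the title.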
Citation:
Anielak, G., Jakacki, G., & Lasota, S. (2015). Incremental test case generation using bounded model checking: an application to automatic rating. International Journal on Software Tools for Technology Transfer, 17(3), 339–349. https://doi.org/10.1007/s10009-014-0317-2