Incremental test case generation using bounded model checking: an application to automatic rating

Abstract

In this paper we focus on the task of rating solutions to a programming exercise. State-of-the-art rating methods generally examine each solution against an exhaustive set of test cases, typically designed manually, so an issue of completeness arises. We propose applying bounded model checking to the automatic generation of test cases. Our experimental evaluation reveals a substantial increase in the accuracy of ratings at the cost of a moderate increase in the computational resources needed. Most importantly, the application of model checking uncovers errors in solutions that would previously have been classified as correct.
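The abstract does not name a particular model checker, exercise, or harness layout, so the sketch below is only an illustration of the common idiom behind BMC-based test generation, assuming a C exercise and the CBMC tool: the rated submission and a trusted reference solution are compared on a nondeterministic input, and any counterexample produced by the checker is a concrete input on which they disagree, i.e. a new test case. All function names and the example bug are hypothetical.

```c
#include <assert.h>

/* Illustrative harness (hypothetical exercise and names):
 * the task is to compute 1 + 2 + ... + n.
 * reference_solution plays the role of the trusted model solution;
 * candidate_solution is the submission being rated, with an
 * off-by-one bug planted for the sake of the example. */

static int reference_solution(int n) {
    return n * (n + 1) / 2;
}

static int candidate_solution(int n) {
    int s = 0;
    for (int i = 1; i < n; i++)   /* bug: should be i <= n */
        s += i;
    return s;
}

/* Declared but never defined: CBMC treats the return value as a
 * nondeterministic (symbolic) input. */
int nondet_int(void);

int main(void) {
    int n = nondet_int();

    /* Keep inputs inside the exercise's stated domain. */
    if (n < 0 || n > 20)
        return 0;

    /* A counterexample to this assertion is a concrete input on which
     * the candidate disagrees with the reference, i.e. a test case
     * that exposes an error in the candidate. */
    assert(candidate_solution(n) == reference_solution(n));
    return 0;
}
```

Running something like "cbmc harness.c --unwind 21 --trace" asks CBMC to explore all inputs up to the loop bound and, for the harness above, report the failing assertion together with a trace containing a concrete value of n (n = 1 already distinguishes the two functions here). Adding that value to the test suite and repeating the check is one way such counterexamples can grow a test set step by step; the tool, exercise, and bound are assumptions made for illustration, not details taken from the paper.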

Citation (APA)

Anielak, G., Jakacki, G., & Lasota, S. (2015). Incremental test case generation using bounded model checking: an application to automatic rating. International Journal on Software Tools for Technology Transfer, 17(3), 339–349. https://doi.org/10.1007/s10009-014-0317-2
