Tightening test coverage metrics: A case study in equivalence checking using k-induction

Abstract

We present a case study applying the k-induction method to equivalence checking of Simulink designs. In particular, we are interested in the problem of equivalence detection in mutation-based testing: given a design S, determining whether a "mutant" design S′ derived from S by syntactic fault injection is behaviourally equivalent to S. In this situation, efficient equivalence checking techniques are needed to avoid redundant and expensive search for test cases that observe differences between S and S′. We have integrated k-induction into our test case generation framework for Simulink. We show, using a selection of benchmarks, that k-induction can be effective in detecting equivalent mutants, sometimes as a stand-alone technique, and sometimes with some manual assistance. We further discuss how the level of automation of the method can be increased by using static analysis to derive strengthening invariants from the structure of the Simulink models. © 2011 Springer-Verlag Berlin Heidelberg.
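To make the method concrete, the following Python sketch illustrates k-induction applied to mutant equivalence checking on a deliberately tiny example. It models the design S and its mutant S′ as saturating counters over a small finite domain and checks their lockstep output equality; all names here (step_S, step_mutant, LIMIT, and so on) are illustrative inventions, not the paper's Simulink framework, and a real implementation would discharge the base and step cases with a SAT/SMT solver rather than explicit enumeration.

from itertools import product

# Toy stand-ins for a Simulink design S and a syntactically mutated design S'.
# Both are saturating counters over a small finite domain, so the base and
# step cases of k-induction can be discharged by brute-force enumeration
# instead of a SAT/SMT solver.
LIMIT = 3
STATES = range(LIMIT + 1)   # finite state domain, enumerated explicitly
INPUTS = (0, 1)             # 1 = count, 0 = hold

def step_S(s, i):
    return min(s + i, LIMIT)

def step_mutant(s, i):
    # Syntactic variant of step_S; behaviourally it computes the same function.
    return s + i if s + i < LIMIT else LIMIT

def output(s):
    return s

def prop_equal(prod_state):
    # Property P: the outputs of S and S' agree on the product state.
    s1, s2 = prod_state
    return output(s1) == output(s2)

def prod_step(prod_state, i):
    # Lockstep composition of S and S' under the same input.
    s1, s2 = prod_state
    return (step_S(s1, i), step_mutant(s2, i))

INIT = (0, 0)

def base_case(k):
    # All executions of at most k steps from INIT satisfy P at every state.
    for inputs in product(INPUTS, repeat=k):
        state = INIT
        if not prop_equal(state):
            return False
        for i in inputs:
            state = prod_step(state, i)
            if not prop_equal(state):
                return False
    return True

def step_case(k):
    # Any k+1 consecutive states, starting from an arbitrary (possibly
    # unreachable) product state, on which P holds for the first k states
    # must satisfy P in the last state as well.
    for start in product(STATES, STATES):
        for inputs in product(INPUTS, repeat=k):
            trace = [start]
            for i in inputs:
                trace.append(prod_step(trace[-1], i))
            if all(prop_equal(s) for s in trace[:-1]) and not prop_equal(trace[-1]):
                return False
    return True

def k_induction(max_k=5):
    for k in range(1, max_k + 1):
        if not base_case(k):
            return None      # concrete counterexample: mutant is not equivalent
        if step_case(k):
            return k         # P is k-inductive: mutant proved equivalent
    return None              # inconclusive up to max_k

print("mutant proved equivalent at k =", k_induction())

Note that the step case ranges over arbitrary, possibly unreachable, product states. In practice this is where the strengthening invariants mentioned in the abstract, derived by static analysis of the Simulink model structure, would be conjoined with the property to rule out spurious counterexamples and let the induction succeed at a smaller k.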

Citation (APA)

Donaldson, A. F., He, N., Kroening, D., & Rümmer, P. (2011). Tightening test coverage metrics: A case study in equivalence checking using k-induction. In Lecture Notes in Computer Science (Vol. 6957 LNCS, pp. 297–315). https://doi.org/10.1007/978-3-642-25271-6_16
