Explaining counterexamples using causality


Abstract

When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
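
To make the counterfactual idea behind the abstract concrete, here is a minimal sketch in Python, under simplifying assumptions: the specification is given as a Python predicate over a finite trace of boolean signal values, and a (time, signal) pair is flagged as a candidate cause if flipping that single value alone makes the trace satisfy the property. This single-flip test only illustrates the Halpern-Pearl counterfactual criterion; it is not the paper's definition or its approximation algorithm, and all names below (`single_flip_causes`, `spec`, `trace`) are hypothetical.

```python
# Sketch: flag (time, signal) pairs whose single-value flip would make a
# failing trace satisfy the specification -- a toy counterfactual test in
# the spirit of Halpern-Pearl causality, not the paper's actual algorithm.
from copy import deepcopy
from typing import Callable, Dict, List, Tuple

Trace = List[Dict[str, bool]]  # trace[t][signal] -> value of signal at time t

def single_flip_causes(trace: Trace,
                       spec: Callable[[Trace], bool]) -> List[Tuple[int, str]]:
    """Return the (time, signal) pairs whose flip repairs the failure."""
    assert not spec(trace), "expected a counterexample trace"
    causes = []
    for t, state in enumerate(trace):
        for signal in state:
            flipped = deepcopy(trace)
            flipped[t][signal] = not flipped[t][signal]
            if spec(flipped):  # counterfactual: this flip changes the verdict
                causes.append((t, signal))
    return causes

# Hypothetical example: "every req is acknowledged in the next step"
# fails on this trace; the flagged pairs are where red dots would go.
def spec(trace: Trace) -> bool:
    return all(not s["req"] or trace[t + 1]["ack"]
               for t, s in enumerate(trace[:-1]))

trace = [{"req": True, "ack": False},
         {"req": False, "ack": False},
         {"req": False, "ack": True}]
print(single_flip_causes(trace, spec))  # [(0, 'req'), (1, 'ack')]
```

Even this naive enumeration needs only one specification evaluation per (time, signal) pair, i.e. polynomially many in the trace size, which is consistent with the abstract's point that a useful approximation of the cause set can be computed in polynomial time even though the exact set is harder.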

Citation (APA)

Beer, I., Ben-David, S., Chockler, H., Orni, A., & Trefler, R. (2009). Explaining counterexamples using causality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5643 LNCS, pp. 94–108). https://doi.org/10.1007/978-3-642-02658-4_11
