What went wrong: Explaining counterexamples

Abstract

One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and nonfailing runs, and information on the necessary changes in scheduling and environmental actions needed to cause successful runs to fail.
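
As a rough illustration of the first analysis the abstract mentions (identifying source code crucial to distinguishing failing and succeeding runs), the sketch below is a minimal Python reconstruction, not the authors' Java PathFinder implementation; the trace format and the function name distinguish are invented for the example. It treats each run as the set of source locations it visits and reports locations executed in every failing run but no succeeding run, and vice versa.

    # Hedged sketch, not the authors' tool: a trace is modeled as the list
    # of source locations (e.g. "A.java:42") one execution visits.
    from typing import Iterable, List, Set, Tuple

    def distinguish(failing: Iterable[List[str]],
                    succeeding: Iterable[List[str]]) -> Tuple[Set[str], Set[str]]:
        """Locations that separate failing runs from succeeding ones."""
        fail_sets = [set(t) for t in failing]
        pass_sets = [set(t) for t in succeeding]
        # Executed in every failing run but never in a succeeding run:
        # prime suspects for the cause of the error.
        only_failing = set.intersection(*fail_sets) - set.union(*pass_sets)
        # Executed in every succeeding run but never in a failing run:
        # often the code the failing runs "should" have reached.
        only_succeeding = set.intersection(*pass_sets) - set.union(*fail_sets)
        return only_failing, only_succeeding

    failing = [["A.java:3", "A.java:7", "A.java:9"],
               ["A.java:3", "A.java:7", "A.java:12"]]
    succeeding = [["A.java:3", "A.java:5", "A.java:9"],
                  ["A.java:3", "A.java:5", "A.java:12"]]
    bad, good = distinguish(failing, succeeding)
    print("only in failing runs:   ", sorted(bad))   # ['A.java:7']
    print("only in succeeding runs:", sorted(good))  # ['A.java:5']

The invariant-difference and scheduling analyses the abstract lists are in the same spirit: compute a property over each set of runs and report where the failing and succeeding sets disagree.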

Citation (APA)

Groce, A., & Visser, W. (2003). What went wrong: Explaining counterexamples. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2648, 121–135. https://doi.org/10.1007/3-540-44829-2_8
