This paper shows that different “meta-model-checking” analyses can be conducted efficiently on a generic data structure we call a support set. Support sets may be viewed as abstract encodings of the “evidence” a model checker uses to justify the yes/no answers it computes. We indicate how model checkers may be modified to compute support sets without compromising their time or space complexity. We also show how support sets may be used for a variety of different analyses of model-checking results, including the generation of diagnostic information for explaining negative model-checking results, and the certification of model-checking results (is the evidence internally consistent?).
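To make the idea concrete, the sketch below shows one plausible (hypothetical) encoding of a support set, not the paper's exact formalism: a map from (state, formula) assertions to the assertions that justify them, together with a simple internal-consistency check and a routine that prints the evidence as diagnostic output. All names (`Assertion`, `certify`, `explain`) are illustrative assumptions.

```python
from typing import Dict, List, Tuple

# Hypothetical sketch: an assertion is a (state, formula) pair; a support set
# maps each assertion the checker claims to hold to the assertions that
# justify it. The structure is illustrative, not the paper's definition.
Assertion = Tuple[str, str]            # (state name, formula text)
SupportSet = Dict[Assertion, List[Assertion]]

def certify(support: SupportSet) -> bool:
    """Internal consistency: every assertion cited as support must itself
    be recorded (with its own support) in the table."""
    recorded = set(support)
    return all(dep in recorded
               for deps in support.values()
               for dep in deps)

def explain(support: SupportSet, root: Assertion, depth: int = 0) -> None:
    """Print the evidence tree rooted at `root` as simple diagnostics."""
    state, formula = root
    print("  " * depth + f"{state} satisfies {formula}")
    for dep in support.get(root, []):
        explain(support, dep, depth + 1)

if __name__ == "__main__":
    # Toy example: s0 satisfies <a>p because its a-successor s1 satisfies p.
    evidence: SupportSet = {
        ("s0", "<a>p"): [("s1", "p")],
        ("s1", "p"): [],
    }
    print("consistent:", certify(evidence))
    explain(evidence, ("s0", "<a>p"))
```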
CITATION
Tan, L., & Cleaveland, R. (2002). Evidence-based model checking. In Lecture Notes in Computer Science (Vol. 2404, pp. 455–470). Springer-Verlag. https://doi.org/10.1007/3-540-45657-0_37