ExplainHoudini: Making Houdini inference transparent

Abstract

Houdini is a simple yet scalable technique for annotation inference for modular contract checking. The input to Houdini is a set of candidate annotations, and the output is a consistent subset of these candidates. Since the technique is most useful as an annotation assistant for user-guided refinement, understanding why an annotation was removed is crucial for the user to refine the set of annotations and to classify false errors easily. This is especially true when applying Houdini to large legacy modules with thousands of procedures and deep call chains. In this work we present a method, ExplainHoudini, that explains why a given candidate was removed, purely in terms of the existing candidates. We have implemented this algorithm and report preliminary experience of applying it to large modules. © 2011 Springer-Verlag.
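
For background (this is not part of the published abstract), the consistent subset Houdini computes is the greatest fixpoint of a simple removal loop: repeatedly check every candidate against the current set and drop any candidate that fails to verify, until the set stabilizes. The Python sketch below illustrates that loop; the check(c, assumed) oracle is a hypothetical placeholder for the underlying modular verifier, which in practice checks whole procedures rather than individual candidates.

def houdini(candidates, check):
    """Greatest-fixpoint removal loop (a sketch, not the paper's code).

    candidates: the initial set of candidate annotations.
    check(c, assumed): hypothetical oracle -- True iff candidate c
        verifies when every annotation in `assumed` may be assumed.
    Returns the maximal consistent subset of `candidates`.
    """
    current = set(candidates)
    changed = True
    while changed:
        changed = False
        for c in list(current):
            # Drop any candidate that fails under the current set.
            if not check(c, current):
                current.discard(c)
                changed = True
    return current

# Toy run: "a" never verifies; "b" verifies only while "a" is assumed.
cands = {"a", "b", "c"}
ok = lambda c, assumed: c != "a" and (c != "b" or "a" in assumed)
print(houdini(cands, ok))  # {'c'}

In this toy run, "b" verifies only as long as "a" remains in the set; once "a" is dropped, "b" falls with it. Reconstructing such chains of removals, expressed purely in terms of the other candidates, is exactly the explanation problem ExplainHoudini addresses.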

Citation (APA)

Lahiri, S. K., & Vanegue, J. (2011). ExplainHoudini: Making Houdini inference transparent. In Lecture Notes in Computer Science (Vol. 6538, pp. 309–323). Springer. https://doi.org/10.1007/978-3-642-18275-4_22
