Z-ranking: Using statistical analysis to counter the impact of static analysis approximations

Abstract

This paper explores z-ranking, a technique for ranking error reports emitted by static program checking tools. Such tools often use approximate analysis schemes, which produce false error reports. These false positives can easily render a checker useless, both by hiding real errors amid the false ones and by causing the tool to be discarded as irrelevant. Empirically, all tools that effectively find errors have false positive rates that can easily reach 30–100%. Z-ranking employs a simple statistical model to rank the error messages most likely to be true errors above those least likely to be. This paper demonstrates that z-ranking applies to a range of program checking problems and that it performs up to an order of magnitude better than randomized ranking. Further, it has transformed previously unusable analysis tools into effective program error finders. © Springer-Verlag Berlin Heidelberg 2003.
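
The ranking itself is easy to prototype. The sketch below illustrates the idea under stated assumptions: each error report belongs to a population of related checks, and reports are ranked by the z-test statistic of that population's observed success proportion against a baseline proportion p0. The report names, the counts, and the choice of p0 are hypothetical, and this is a minimal sketch rather than the paper's exact formulation.

```python
import math

def z_score(successes: int, failures: int, p0: float = 0.9) -> float:
    """z-test statistic for the observed success proportion of a check
    population versus a baseline p0 (p0 = 0.9 is an assumed knob here,
    not a value taken from the paper)."""
    n = successes + failures
    if n == 0:
        return 0.0
    p_hat = successes / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

# Hypothetical reports: each carries the success/failure counts of its
# check population. Populations where the checked property almost always
# holds (many successes, few failures) score high, so their rare failures
# are likely real bugs; failure-dominated populations sink to the bottom.
reports = [
    {"id": "report-A", "successes": 48, "failures": 2},
    {"id": "report-B", "successes": 3, "failures": 20},
    {"id": "report-C", "successes": 10, "failures": 1},
]

for r in sorted(reports, key=lambda r: z_score(r["successes"], r["failures"]),
                reverse=True):
    print(f'{r["id"]}: z = {z_score(r["successes"], r["failures"]):+.2f}')
```

On this toy data the ordering is report-A, report-C, report-B: the two reports drawn from success-dominated populations rank above the one from a failure-dominated population, which is the behavior z-ranking relies on.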

Citation (APA)

Kremenek, T., & Engler, D. (2003). Z-ranking: Using statistical analysis to counter the impact of static analysis approximations. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2694, 295–315. https://doi.org/10.1007/3-540-44898-5_16
