Boosting static analysis accuracy with instrumented test executions


Abstract

The two broad approaches to discovering properties of programs, static and dynamic analysis, have complementary strengths: static techniques perform exhaustive exploration and prove upper bounds on program behaviors, while the dynamic analysis of test cases provides concrete evidence of these behaviors and promises low false alarm rates. In this paper, we present DynaBoost, a system which uses information obtained from test executions to prioritize the alarms of a static analyzer. We instrument the program to dynamically look for the dataflow behaviors predicted by the static analyzer, and use these results to bootstrap a probabilistic alarm ranking system, in which the user repeatedly inspects the alarm judged most likely to be a real bug, and the system re-ranks the remaining alarms in response to user feedback. The combined system is able to exploit information that cannot easily be provided by users, and significantly reduces the human alarm inspection burden: by 35% compared to the baseline ranking system, and by 89% compared to an unaided programmer triaging alarm reports.
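The inspect-and-re-rank loop described in the abstract can be sketched roughly as follows. This is an illustrative simplification, not the paper's actual probabilistic model: the alarm representation, the shared-dataflow-fact heuristic, and the multiplicative reweighting rule are all assumptions made for this sketch.

```python
def rank_alarms(alarms):
    """Sort alarms by their current estimated probability of being a real bug."""
    return sorted(alarms, key=lambda a: a["p"], reverse=True)

def update(alarms, inspected, is_bug, boost=1.5):
    """Re-rank after feedback: alarms that share a dataflow fact with the
    inspected alarm move toward its observed label (hypothetical
    multiplicative reweighting; the paper uses a probabilistic model)."""
    for a in alarms:
        if a["facts"] & inspected["facts"]:
            a["p"] = min(0.99, a["p"] * boost) if is_bug else a["p"] / boost
    return alarms

def triage(alarms, oracle):
    """Interactive loop: repeatedly surface the top-ranked alarm, ask the
    user (here, an oracle function) for a verdict, and re-rank the rest.
    Returns the order in which alarms were inspected."""
    remaining = list(alarms)
    order = []
    while remaining:
        remaining = rank_alarms(remaining)
        top = remaining.pop(0)
        order.append(top["id"])
        update(remaining, top, oracle(top))
    return order
```

In this toy model, confirming one alarm as a real bug raises the rank of other alarms that depend on the same predicted dataflow fact, so correlated alarms surface earlier than their priors alone would suggest.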

Citation (APA)

Chen, T., Heo, K., & Raghothaman, M. (2021). Boosting static analysis accuracy with instrumented test executions. In ESEC/FSE 2021 - Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (pp. 1154–1165). Association for Computing Machinery. https://doi.org/10.1145/3468264.3468626
