Prioritizing alerts from multiple static analysis tools, using classification models

Abstract

Static analysis (SA) tools examine code for flaws without executing it, and produce warnings ("alerts") about possible flaws. A human auditor must then evaluate whether each purported flaw is genuine. The effort required to manually audit all alerts and repair all confirmed flaws often exceeds a project's budget and schedule. An alert triaging tool can strategically prioritize alerts for examination, for instance by ranking them on classifier confidence. We developed and tested classification models that predict whether static analysis alerts are true or false positives, using a novel combination of multiple static analysis tools, features extracted from the alerts, alert fusion, code-base metrics, and archived audit determinations. We trained classifiers on one partition of the data and evaluated them on the remainder using standard measures, including specificity, sensitivity, and accuracy. Test results and overall data analysis show that accurate classifiers were developed, and in particular that using multiple SA tools increased classifier accuracy; however, labeled data for many types of flaws were under-represented in the archive data (or absent from it), resulting in poor predictive accuracy for those flaws.
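The workflow the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features, data, and choice of a random-forest model are all assumptions made here for the example, and the data are synthetic.

```python
# Sketch: predict whether SA alerts are true (1) or false (0) positives,
# train on one data partition, evaluate on the held-out partition with
# the metrics named in the abstract. All features/labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-alert features (illustrative only):
X = np.column_stack([
    rng.integers(0, 3, n),    # which SA tool raised the alert
    rng.integers(0, 50, n),   # checker/rule id
    rng.integers(5, 500, n),  # size of the flagged function (LOC)
    rng.integers(1, 30, n),   # cyclomatic complexity
])
# Synthetic audit determinations standing in for archived labels
y = (X[:, 3] + rng.normal(0, 5, n) > 12).astype(int)

# Partition the data, fit, and evaluate
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, pred)
sensitivity = recall_score(y_test, pred)               # true-positive rate
specificity = recall_score(y_test, pred, pos_label=0)  # true-negative rate

# clf.predict_proba(X_test)[:, 1] yields a per-alert confidence that
# a triaging tool could use to rank alerts for manual audit.
```

In a real deployment, the labels would come from the archived audit determinations, and alerts could be sorted by the model's predicted probability so auditors examine the most likely true positives first.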

Citation (APA)

Flynn, L., Snavely, W., Svoboda, D., VanHoudnos, N., Qin, R., Burns, J., … Marce-Santurio, G. (2018). Prioritizing alerts from multiple static analysis tools, using classification models. In Proceedings - International Conference on Software Engineering (pp. 13–20). IEEE Computer Society. https://doi.org/10.1145/3194095.3194100
