Comprehensive Evaluation of Static Analysis Tools for Their Performance in Finding Vulnerabilities in Java Code


Abstract

Various static code analysis tools have been designed to automatically detect software faults and security vulnerabilities. This paper aims to 1) conduct an empirical evaluation of the performance of five free, state-of-the-art static analysis tools in detecting Java security vulnerabilities, using a well-defined and repeatable approach; and 2) report on the vulnerabilities that are best and worst detected by static Java analyzers. We used the Juliet benchmark test suite in a controlled experiment to assess the effectiveness of five widely used Java static analysis tools. Most vulnerabilities were detected by one, two, or three tools; only one vulnerability was detected by four tools. The tools missed 13% of the Java vulnerability categories appearing in our experiment. More critically, none of the five tools could identify all the vulnerabilities in our experiment. We conclude that, despite recent improvements in their methodologies, current state-of-the-art static analysis tools remain ineffective at identifying security vulnerabilities, even those occurring in a small-scale, artificial test suite.
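To illustrate the kind of flaw the evaluated tools target, here is a minimal, hypothetical Juliet-style test case sketch (not taken from the paper or the actual suite) for CWE-259, use of a hard-coded password. Juliet organizes each case into a flawed "bad" variant and a fixed "good" variant; the class and method names below merely mimic that convention.

```java
// Hypothetical sketch of a Juliet-style test case for CWE-259 (hard-coded password).
// The "bad" variant contains the flaw a static analyzer should flag;
// the "good" variant shows the corrected pattern.
public class CWE259_Example {

    // BAD: the credential is embedded in the source code,
    // so any reader of the binary or repository can recover it.
    static boolean badAuthenticate(String attempt) {
        String password = "s3cr3t"; // hard-coded secret -- should be flagged
        return password.equals(attempt);
    }

    // GOOD: the credential is supplied externally (here, an environment
    // variable), leaving nothing for the analyzer to flag.
    static boolean goodAuthenticate(String attempt) {
        String password = System.getenv("APP_PASSWORD");
        return password != null && password.equals(attempt);
    }

    public static void main(String[] args) {
        System.out.println(badAuthenticate("s3cr3t"));
    }
}
```

A benchmark like Juliet pairs thousands of such bad/good variants across many CWE categories, so a tool's detection rate and false-positive rate can be measured per category.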

Citation (APA)

Alqaradaghi, M., & Kozsik, T. (2024). Comprehensive Evaluation of Static Analysis Tools for Their Performance in Finding Vulnerabilities in Java Code. IEEE Access, 12, 55824–55842. https://doi.org/10.1109/ACCESS.2024.3389955
