Assisting in Auditing of Buffer Overflow Vulnerabilities via Machine Learning

Abstract

A buffer overflow vulnerability arises when a programmer's intended buffer handling is not implemented correctly. In this paper, a static analysis method based on machine learning is proposed to assist in auditing buffer overflow vulnerabilities. First, an extended code property graph is constructed from the source code, and seven kinds of static attributes describing buffer properties are extracted from it. After these attributes are embedded into a vector space, five frequently used machine learning algorithms are employed to classify functions as suspicious (vulnerable) or secure. The five classifiers reached an average recall of 83.5%, an average true negative rate of 85.9%, a best recall of 96.6%, and a best true negative rate of 91.4%. Because the training samples are imbalanced, the average precision of the classifiers is 68.9% and the average F1 score is 75.2%. When the classifiers were applied to a new program, our method reduced the false positives to 1/12 of those reported by Flawfinder.
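
To make the classification stage concrete, the sketch below trains several commonly used classifiers on per-function attribute vectors and reports recall and true negative rate, the metrics cited in the abstract. This is a minimal illustration, not the paper's implementation: the abstract does not name the five algorithms, so the classifier choices here are assumptions, and the attribute vectors X and labels y are random placeholders standing in for features extracted from the extended code property graph.

```python
# Sketch: classify functions as suspicious vs. secure from 7 static
# buffer attributes, then report recall and true negative rate.
# X and y below are placeholders; in the paper they would come from
# the extended code property graph of the analyzed program.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((500, 7))                      # placeholder attribute vectors
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # placeholder labels (1 = suspicious)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Assumed classifier set; the paper's five algorithms are not named here.
classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
    recall = tp / (tp + fn)   # how many vulnerable functions are caught
    tnr = tn / (tn + fp)      # how many secure functions are left alone
    print(f"{name}: recall={recall:.3f} TNR={tnr:.3f}")
```

In an imbalanced setting like the one the abstract describes, recall and true negative rate are reported alongside precision and F1 because accuracy alone would be dominated by the majority (secure) class.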

Citation (APA)

Meng, Q., Feng, C., Zhang, B., & Tang, C. (2017). Assisting in Auditing of Buffer Overflow Vulnerabilities via Machine Learning. Mathematical Problems in Engineering, 2017. https://doi.org/10.1155/2017/5452396
