Low Level Source Code Vulnerability Detection Using Advanced BERT Language Model

  • Alqarni M
  • Azim A

Abstract

In software security and reliability, automated vulnerability detection is an essential task: software must be tested and checked before it is shipped to clients for production use. As technology changes rapidly, source code bases are also growing massive, so adequate accuracy in automated vulnerability detection has become critical for producing secure software and removing security concerns. According to previous research, deep and recurrent neural network models cannot achieve satisfactory test accuracy in detecting all vulnerabilities. In this paper, we present experimental research on Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art natural language processing model, aimed at improving test accuracy; our contributions include updates to the deep layers of the BERT model. We also balance the dataset and fine-tune the model with improved parameters. This combination of changes achieves a new level of accuracy for the BERT model: 99.30% test accuracy in detecting source code vulnerabilities. We have made our balanced dataset and advanced model publicly available for research purposes.
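The abstract mentions balancing the dataset before fine-tuning. The paper's exact balancing procedure is not described here, but a common approach for a binary vulnerable/non-vulnerable corpus is random undersampling of the majority class. A minimal sketch, assuming that technique (function name and toy data are hypothetical):

```python
import random

def balance_binary_dataset(samples, labels, seed=0):
    """Randomly undersample the majority class so vulnerable (1) and
    non-vulnerable (0) code snippets appear equally often.
    Hypothetical helper; the paper's actual procedure may differ."""
    rng = random.Random(seed)
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    n = min(len(pos), len(neg))
    balanced = [(s, 1) for s in rng.sample(pos, n)] + \
               [(s, 0) for s in rng.sample(neg, n)]
    rng.shuffle(balanced)
    return balanced

# Toy example: 5 vulnerable vs. 2 clean snippets -> 2 of each class remain.
snippets = [f"snippet_{i}" for i in range(7)]
labels = [1, 1, 1, 1, 1, 0, 0]
balanced = balance_binary_dataset(snippets, labels)
print(len(balanced))  # 4
```

The balanced set can then be tokenized and fed to a BERT-style classifier for fine-tuning.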

Citation (APA)

Alqarni, M., & Azim, A. (2022). Low Level Source Code Vulnerability Detection Using Advanced BERT Language Model. Proceedings of the Canadian Conference on Artificial Intelligence. https://doi.org/10.21428/594757db.b85e6625
