SecurityEval dataset: Mining vulnerability examples to evaluate machine learning-based code generation techniques

Abstract

Automated source code generation is currently a popular machine-learning task. It can help software developers write functionally correct code from a given context. However, just like human developers, a code generation model can produce vulnerable code, which developers may then use by mistake. For this reason, evaluating the security of a code generation model is essential. In this paper, we describe SecurityEval, an evaluation dataset that fulfills this purpose. It contains 130 samples covering 75 vulnerability types, each mapped to a Common Weakness Enumeration (CWE) entry. We also demonstrate how the dataset can be used to evaluate one open-source code generation model (InCoder) and one closed-source one (GitHub Copilot).
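To make the evaluation workflow concrete, the sketch below shows how a dataset of this shape might be consumed. It is a minimal illustration, not the paper's harness: the JSON Lines layout, the field names ID and Prompt, and the file name dataset.jsonl are assumptions made for this example, and generate_code is a hypothetical stub standing in for the completion API of the model under test.

```python
import json


def generate_code(prompt: str) -> str:
    """Stand-in for the model under evaluation (e.g., InCoder or Copilot).

    Hypothetical stub: a real harness would call the model's completion
    API here. For illustration, it just appends a placeholder body.
    """
    return prompt + "\n    pass  # model completion would go here\n"


def evaluate(dataset_path: str) -> None:
    # Assumed layout (not confirmed by the paper's text): one JSON object
    # per line, with an "ID" field encoding the mapped CWE and a "Prompt"
    # field holding the partial code the model must complete.
    with open(dataset_path, encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            completion = generate_code(sample["Prompt"])
            # The completed code would then be checked for the sample's
            # mapped CWE, e.g., with a static analyzer or manual review.
            print(sample["ID"], "->", len(completion), "chars generated")


if __name__ == "__main__":
    evaluate("dataset.jsonl")  # hypothetical file name
```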

Citation (APA)

Siddiq, M. L., & Santos, J. C. S. (2022). SecurityEval dataset: Mining vulnerability examples to evaluate machine learning-based code generation techniques. In MSR4P&S 2022: Proceedings of the 1st International Workshop on Mining Software Repositories Applications for Privacy and Security (co-located with ESEC/FSE 2022) (pp. 29–33). Association for Computing Machinery. https://doi.org/10.1145/3549035.3561184
