A Study of Vulnerability Repair in JavaScript Programs with Large Language Models


Abstract

In recent years, JavaScript has become the most widely used programming language, especially in web development. However, writing secure JavaScript code is not trivial, and programmers often make mistakes that lead to security vulnerabilities in web applications. Large Language Models (LLMs) have demonstrated substantial advancements across multiple domains, and their evolving capabilities suggest their potential for automatic code generation from a required specification, including automatic bug fixing. In this study, we explore the accuracy of LLMs, namely ChatGPT and Bard, in finding and fixing security vulnerabilities in JavaScript programs. We also investigate how the context provided in a prompt directs LLMs toward producing a correct patch for vulnerable JavaScript code. Our experiments on real-world software vulnerabilities show that while LLMs are promising for automatic program repair of JavaScript code, achieving a correct bug fix often requires an appropriate amount of context in the prompt.
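To illustrate the kind of repair task the study targets, here is a minimal, hypothetical example (not taken from the paper's dataset) of a common JavaScript vulnerability class, HTML injection (XSS), together with a patched version of the sort an LLM would be asked to produce:

```javascript
// Vulnerable: untrusted input is concatenated directly into markup,
// so a payload like '<img src=x onerror=alert(1)>' executes in the page.
function greetUnsafe(name) {
  return "<p>Hello, " + name + "</p>";
}

// Patched: escape HTML metacharacters before interpolation.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function greetSafe(name) {
  return "<p>Hello, " + escapeHtml(name) + "</p>";
}

const payload = "<img src=x onerror=alert(1)>";
console.log(greetUnsafe(payload)); // markup is injected verbatim
console.log(greetSafe(payload));   // metacharacters are escaped
```

A repair prompt with little context might show only `greetUnsafe`, while a richer prompt would also include how its return value is inserted into the DOM; the abstract's finding is that such added context often determines whether the model's patch is correct.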

APA

Le, T. K., Alimadadi, S., & Ko, S. Y. (2024). A Study of Vulnerability Repair in JavaScript Programs with Large Language Models. In WWW 2024 Companion - Companion Proceedings of the ACM Web Conference (pp. 666–669). Association for Computing Machinery, Inc. https://doi.org/10.1145/3589335.3651463
