Data Race Detection Using Large Language Models

Citations: 5
Mendeley readers: 13

Abstract

Large language models (LLMs) are demonstrating significant promise as an alternative strategy for facilitating analyses and optimizations of high-performance computing programs, circumventing the need for resource-intensive manual tool creation. In this paper, we explore a novel LLM-based data race detection approach combining prompt engineering and fine-tuning techniques. We create a dedicated dataset named DRB-ML, derived from DataRaceBench, with fine-grained labels indicating the presence of data race pairs and their associated variables, line numbers, and read/write information. DRB-ML is then used to evaluate representative LLMs and fine-tune open-source ones. Our experiments show that LLMs can be a viable approach to data race detection. However, they still cannot compete with traditional data race detection tools when detailed information about the variable pairs causing data races is needed.
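To make the setup concrete, the sketch below illustrates the kind of input and fine-grained label the abstract describes: a small OpenMP-style kernel with a loop-carried data race, a zero-shot prompt asking an LLM to identify the race, and a label recording the offending variable pair with line and access information. The kernel, prompt template, and label schema are illustrative assumptions, not the authors' actual DRB-ML format.

```python
# Hypothetical sketch of a prompt-based data race query; the prompt
# wording and label schema are assumptions, not the paper's exact setup.

# A classic loop-carried data race: iteration i writes a[i + 1] while
# iteration i + 1 concurrently reads the same element as a[i].
KERNEL = """\
#pragma omp parallel for
for (int i = 0; i < n - 1; i++)
    a[i + 1] = a[i] + 1;
"""

def build_prompt(code: str) -> str:
    """Format a zero-shot data-race question for an LLM (illustrative)."""
    return (
        "Does the following code contain a data race? "
        "If yes, report the variable pair, the line numbers, "
        "and whether each access is a read or a write.\n\n" + code
    )

# A fine-grained label in the spirit of DRB-ML: race present, plus the
# conflicting accesses to the shared array (schema is assumed here).
label = {
    "has_race": True,
    "pairs": [
        {"var": "a", "line": 3, "access": "write"},
        {"var": "a", "line": 3, "access": "read"},
    ],
}

prompt = build_prompt(KERNEL)
```

Evaluating an LLM against such labels lets the comparison go beyond a binary race/no-race verdict to the variable-pair level, which is exactly where the paper finds LLMs still trail traditional detectors.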

Citation (APA)

Chen, L., Ding, X., Emani, M., Vanderbruggen, T., Lin, P. H., & Liao, C. (2023). Data Race Detection Using Large Language Models. In ACM International Conference Proceeding Series (pp. 215–223). Association for Computing Machinery. https://doi.org/10.1145/3624062.3624088
