Can OpenAI's Codex Fix Bugs?: An evaluation on QuixBugs

Citations: 58 · Mendeley readers: 44
Abstract

OpenAI's Codex, a GPT-3-like model trained on a large code corpus, has made headlines both in and outside of academia. Given a short user-provided description, it is capable of synthesizing code snippets that are syntactically and semantically valid in most cases. In this work, we investigate whether Codex is able to localize and fix bugs, two important tasks in automated program repair. Our initial evaluation uses the multi-language QuixBugs benchmark (40 bugs in both Python and Java). We find that, despite not being trained for APR, Codex is surprisingly effective and competitive with recent state-of-the-art techniques. Our results also show that Codex is more successful at repairing Python than Java, fixing 50% more bugs in Python.
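To illustrate the kind of defect the abstract refers to, each QuixBugs program contains a single-line bug in an otherwise correct classic algorithm. The sketch below is modeled on the style of QuixBugs's BITCOUNT task (the exact benchmark code may differ); the repair task is to replace the one faulty operator:

```python
def bitcount(n):
    """Count the set bits in n using Kernighan's trick.

    A QuixBugs-style single-line defect would use `n ^= n - 1`
    here instead of `n &= n - 1`; the XOR variant fails to clear
    the lowest set bit, so the loop never terminates for n > 0.
    """
    count = 0
    while n:
        n &= n - 1  # clear the lowest set bit (the correct, repaired line)
        count += 1
    return count
```

An APR tool (or Codex, prompted with the buggy function) succeeds on such a task if its proposed patch passes the benchmark's reference tests, e.g. `bitcount(127)` returning 7.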

Citation (APA)

Prenner, J. A., Babii, H., & Robbes, R. (2022). Can OpenAI’s Codex Fix Bugs?: An evaluation on QuixBugs. In Proceedings - International Workshop on Automated Program Repair, APR 2022 (pp. 69–75). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3524459.3527351
