Neural Program Repair with Execution-based Backpropagation


Abstract

Neural machine translation (NMT) architectures have achieved promising results for automatic program repair. Yet, they tend to generate low-quality patches (e.g., patches that do not compile), because existing approaches optimize a purely syntactic loss function over characters and tokens, without incorporating program-specific information during neural network weight optimization. In this paper, we propose a novel program repair model called RewardRepair. The core novelty of RewardRepair is to improve NMT-based program repair with a loss function based on program compilation and test execution information, rewarding the network for producing patches that compile and that do not overfit. Our experiments show that it is feasible and effective to use compilation and test execution results to optimize the underlying neural repair model. RewardRepair correctly repairs 207 bugs over four benchmarks, including 121 bugs that are fixed for the first time in the literature. RewardRepair also produces up to 45.3% compilable patches, an improvement over the 39% of the state of the art.
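The abstract describes modulating a syntactic (token-level) loss with a signal derived from compiling and testing the candidate patch. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: the function names, the reward values, and the multiplicative combination are illustrative assumptions only.

```python
# Hypothetical sketch of an execution-based loss term (NOT RewardRepair's actual code).
# The idea: amplify the syntactic loss for patches that fail to compile or fail tests,
# and dampen it for patches that pass, so training favors executable, non-overfitting patches.

def execution_reward(compiles: bool, tests_pass: bool) -> float:
    """Map patch execution outcomes to a loss weight (values are illustrative)."""
    if not compiles:
        return 2.0   # uncompilable patch: strongest penalty
    if not tests_pass:
        return 1.5   # compiles but fails tests: moderate penalty
    return 0.5       # compiles and passes tests: reward by dampening the loss

def reward_weighted_loss(syntactic_loss: float, compiles: bool, tests_pass: bool) -> float:
    """Combine the token-level (cross-entropy style) loss with the execution signal."""
    return syntactic_loss * execution_reward(compiles, tests_pass)
```

For example, two candidate patches with the same token-level loss of 1.0 would contribute 2.0 to the training objective if uncompilable, but only 0.5 if they compile and pass the test suite.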


Citation (APA)

Ye, H., Martinez, M., & Monperrus, M. (2022). Neural Program Repair with Execution-based Backpropagation. In Proceedings - International Conference on Software Engineering (Vol. 2022-May, pp. 1506–1518). IEEE Computer Society. https://doi.org/10.1145/3510003.3510222
