Abstract
Classifying software changes, i.e., commits, into maintenance activities enables improved decision-making in software maintenance, thereby decreasing maintenance costs. Researchers have commonly attempted commit classification using keyword-based analysis of commit messages; source code changes and density data have also been used for this purpose. Recent works have leveraged contextual semantic analysis of commit messages using pre-trained language models. However, these approaches mostly depend on training data, making their ability to generalize a matter of concern. In this study, we explore the possibility of using the in-context learning capabilities of large language models for commit classification. In-context learning does not require training data, making our approach less prone to overfitting and more generalizable. Experimental results using GPT-3 achieve a best accuracy of 75.7% and a kappa of 61.7%, comparable to the performance of all but one of the baseline models, highlighting the applicability of in-context learning to commit classification.
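The in-context approach described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual prompt: the label set (Swanson's classic corrective/adaptive/perfective maintenance categories), the few-shot examples, and the prompt wording are all assumptions. The sketch only assembles the prompt; in practice the prompt would be sent to a completion endpoint such as GPT-3, and the model's continuation taken as the predicted class, with no model training involved.

```python
# Hypothetical few-shot prompt construction for commit classification.
# Labels and examples below are illustrative assumptions, not taken
# from the paper; no model weights are updated (in-context learning).

LABELS = ["corrective", "adaptive", "perfective"]

FEW_SHOT_EXAMPLES = [
    ("Fix null pointer dereference in parser", "corrective"),
    ("Migrate build scripts to Gradle 8", "adaptive"),
    ("Refactor session handling for readability", "perfective"),
]

def build_prompt(commit_message: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    parts = [
        "Classify the commit message into one maintenance activity: "
        + ", ".join(LABELS) + "."
    ]
    for msg, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Commit: {msg}\nActivity: {label}")
    # The trailing "Activity:" cue invites the model to complete
    # with a single label word.
    parts.append(f"Commit: {commit_message}\nActivity:")
    return "\n\n".join(parts)

prompt = build_prompt("Fix off-by-one error in pagination")
```

The prompt string would then be passed to the language model; because the task specification and examples live entirely in the prompt, no labeled training corpus is needed beyond the handful of in-context examples.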
Citation
Sazid, Y., Kuri, S., Ahmed, K. S., & Satter, A. (2024). Commit Classification into Maintenance Activities Using In-Context Learning Capabilities of Large Language Models. In International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE - Proceedings (pp. 506–512). Science and Technology Publications, Lda. https://doi.org/10.5220/0012686700003687