Evaluating the effectiveness of local explanation methods on source code-based defect prediction models


Abstract

Interpretability is considered one of the key factors for applying defect prediction in practice. As one approach to interpretation, local explanation methods have been widely used to explain individual predictions of models built on datasets of traditional features. There have also been attempts to apply local explanation methods to source code-based defect prediction models, but unfortunately, they often yield poor results. Since it is unclear how effective these local explanation methods are, we evaluate them with automatic metrics that focus on local faithfulness and explanation precision. Based on the experimental results, we find that the effectiveness of local explanation methods depends on the adopted defect prediction model: they are effective on token frequency-based models, but they may not be effective enough to explain all predictions of deep learning-based models. In addition, we find that the hyperparameters of local explanation methods should be carefully optimized to obtain more precise and meaningful explanations.
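To illustrate the kind of local explanation the abstract refers to, the sketch below applies a simple leave-one-out perturbation explanation to a toy token frequency-based defect prediction model. The model, token vocabulary, and weights are all hypothetical assumptions for illustration; they are not taken from the paper, and real evaluations would use methods such as LIME on trained models.

```python
# Hedged sketch: a perturbation-based local explanation applied to a
# toy token-frequency defect prediction model. Tokens and weights are
# illustrative assumptions, not values from the paper.
import math

# Toy "trained" model: logistic regression over token counts.
WEIGHTS = {"strcpy": 2.0, "malloc": 0.8, "free": -0.5, "return": -0.1}
BIAS = -1.0

def predict_defect_prob(tokens):
    """Predicted probability that a code snippet is defective."""
    z = BIAS + sum(WEIGHTS.get(t, 0.0) * c for t, c in tokens.items())
    return 1.0 / (1.0 + math.exp(-z))

def local_explanation(tokens):
    """Importance of each token = change in predicted defect
    probability when that token is removed (leave-one-out)."""
    base = predict_defect_prob(tokens)
    scores = {}
    for t in tokens:
        reduced = {k: c for k, c in tokens.items() if k != t}
        scores[t] = base - predict_defect_prob(reduced)
    # Rank tokens by the magnitude of their influence.
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

snippet = {"strcpy": 1, "malloc": 2, "return": 1}
for token, importance in local_explanation(snippet):
    print(f"{token:8s} {importance:+.3f}")
```

For this toy instance, `strcpy` dominates the explanation because removing it causes the largest drop in the predicted defect probability; the paper's point is that such rankings are faithful for token frequency-based models but can be less reliable for deep learning-based ones.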

Citation (APA)

Gao, Y., Zhu, Y., & Yu, Q. (2022). Evaluating the effectiveness of local explanation methods on source code-based defect prediction models. In Proceedings - 2022 Mining Software Repositories Conference, MSR 2022 (pp. 640–645). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3524842.3528472
