Abstract
Consistency, local stability, and approximation properties ensure that a model or explanation method produces reliable and predictable outcomes. Shapash helps users understand how a model makes its decisions. With machine learning (ML) systems, healthcare experts can identify individuals at higher risk and implement interventions that reduce the occurrence and severity of disease. ML models have achieved high prediction accuracy, although that accuracy depends on the quality and quantity of the data used for training. Despite the wide application and high accuracy of different ML models for disease prediction, explaining their predictive outcomes is just as important to healthcare professionals, patients, and even their developers. However, most ML systems do not explain their outcomes. To address this explainability issue, techniques such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) have been proposed in recent years. Furthermore, the consistency, local stability, and approximation of explanations remain open research topics in ML. This study investigated the consistency, stability, and approximation of LIME and SHAP in predicting heart disease (HD). The results suggest that LIME and SHAP generated similar explanations (distance = 0.35), compared with the active coalition of variables (ACV) explanation (distance = 0.43).
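The abstract compares explanation methods by a distance between their feature attributions. The paper does not give the exact metric, so the sketch below is only an illustration of one plausible scheme: normalize each method's per-feature attribution magnitudes and take the Euclidean distance between the normalized vectors. The function names and the attribution values are hypothetical, not from the study.

```python
import math

def normalize(attrs):
    """Scale attribution magnitudes so they sum to 1 (hypothetical scheme)."""
    total = sum(abs(a) for a in attrs)
    return [abs(a) / total for a in attrs]

def explanation_distance(attrs_a, attrs_b):
    """Euclidean distance between two normalized attribution vectors:
    0 means identical relative feature importance."""
    na, nb = normalize(attrs_a), normalize(attrs_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(na, nb)))

# Hypothetical per-feature attributions for one heart-disease prediction
lime_attrs = [0.30, -0.10, 0.25, 0.05]
shap_attrs = [0.28, -0.12, 0.22, 0.08]
print(round(explanation_distance(lime_attrs, shap_attrs), 3))  # small value → similar explanations
```

A smaller distance indicates that the two methods assign similar relative importance to the same features, which is how a claim like "distance = 0.35 vs. 0.43" can be read.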
Assegie, T. A., Manivannan, B., Napa, K. K., Vijayammal, B. K. P., Govindarajan, R., Murugan, S., & Mekonnen, A. M. (2024). Consistency, local stability, and approximation of Shapash explanation. Telkomnika (Telecommunication Computing Electronics and Control), 22(3), 673–680. https://doi.org/10.12928/TELKOMNIKA.v22i3.25560