Extracting Surrogate Decision Trees from Black-Box Models to Explain the Temporal Importance of Clinical Features in Predicting Kidney Graft Survival


Abstract

Prognostic models built with machine learning techniques have been used to predict the risk of kidney graft failure after transplantation. Despite clinically suitable predictive performance, the decision logic of these models cannot be interpreted by physicians, hindering clinical adoption. eXplainable Artificial Intelligence (XAI) is an emerging research discipline that investigates methods for explaining such ‘black-box’ machine learning models. In this paper, we present a novel XAI approach to study the influence of time on the information gain of donor and recipient factors in kidney graft survival prediction. We trained the most accurate models, regardless of their transparency level, on subsequent non-overlapping temporal cohorts and extracted faithful decision trees from the models as global surrogate explanations. Comparative exploration of the decision trees reveals how the information gain of the input features changes over time.
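The core technique the abstract describes, fitting an interpretable decision tree to mimic a black-box model's predictions (a global surrogate), can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the model choice, tree depth, and dataset here are assumptions, and the paper's actual clinical cohorts and hyperparameters are not specified in this abstract.

```python
# Minimal sketch of global surrogate decision-tree extraction.
# Assumption: a random forest stands in for the paper's black-box model,
# and synthetic data stands in for a temporal cohort of donor/recipient features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for one temporal cohort.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=0)

# 1. Train an accurate "black-box" model on the cohort.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Relabel the same inputs with the black-box's predictions.
y_bb = black_box.predict(X)

# 3. Fit a shallow decision tree to mimic the black-box globally.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how often the surrogate agrees with the black-box.
fidelity = accuracy_score(y_bb, surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.3f}")

# The surrogate's feature importances approximate each feature's
# information gain for this cohort's time window; comparing them
# across cohorts reveals temporal changes.
print(surrogate.feature_importances_)
```

Repeating steps 1–3 on each non-overlapping temporal cohort yields one surrogate tree per time window, which can then be compared side by side as the abstract describes.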

Citation (APA)

Rad, J., Tennankore, K. K., Vinson, A., & Abidi, S. S. R. (2022). Extracting Surrogate Decision Trees from Black-Box Models to Explain the Temporal Importance of Clinical Features in Predicting Kidney Graft Survival. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13263 LNAI, pp. 88–98). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-09342-5_9
