External validation of a “black-box” clinical predictive model in nephrology: can interpretability methods help illuminate performance differences?


Abstract

The number of machine learning clinical prediction models being published is rising, especially as new fields of application are explored in medicine. Notwithstanding these advances, only a few such models are actually deployed in clinical settings, owing to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients when applied to an external cohort from a German research hospital. To help account for the performance differences observed, we used interpretability methods that allowed experts to scrutinize model behavior at both the global and local levels, making it possible to gain further insight into why the model did not behave as expected on the validation cohort. We argue that practitioners should consider such methods as an additional tool for explaining performance differences and informing model updates in validation studies.
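The abstract does not name the specific interpretability techniques used. As a minimal sketch of the general approach it describes, the Python snippet below applies SHAP to a tree-based classifier on a synthetic cohort; the feature names (age, baseline_creatinine, bypass_time_min), the synthetic data, and the choice of SHAP itself are illustrative assumptions, not the paper's method. The global view ranks features by mean absolute SHAP value; the local view breaks down a single external-cohort prediction.

```python
# Hypothetical sketch: global and local explanations for a binary
# clinical predictor (e.g., AKI after cardiac surgery). Data, features,
# and model are synthetic stand-ins for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "baseline_creatinine": rng.normal(1.0, 0.3, n),
    "bypass_time_min": rng.normal(90, 30, n),
})
# Synthetic outcome loosely driven by two covariates (illustration only).
logits = 0.03 * (X["age"] - 65) + 2.0 * (X["baseline_creatinine"] - 1.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Treat the held-out split as a stand-in for an "external" cohort.
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_dev, y_dev)

# Global view: mean |SHAP| per feature ranks the drivers of the model's
# output on the external cohort, which can be compared against clinical
# expectation and against the development cohort.
explainer = shap.TreeExplainer(model)
shap_vals = explainer.shap_values(X_ext)
global_importance = np.abs(shap_vals).mean(axis=0)
for name, imp in sorted(zip(X.columns, global_importance),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Local view: per-feature contributions for one patient, useful when an
# individual prediction disagrees with expert judgment.
i = 0
print(dict(zip(X.columns, np.round(shap_vals[i], 3))))
```

In a real validation study, comparing such global rankings between the development and external cohorts is one way experts could spot features whose influence shifted across sites, which is the kind of insight the abstract alludes to.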

Citation (APA)

da Cruz, H. F., Pfahringer, B., Schneider, F., Meyer, A., & Schapranow, M. P. (2019). External validation of a “black-box” clinical predictive model in nephrology: can interpretability methods help illuminate performance differences? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11526 LNAI, pp. 191–201). Springer. https://doi.org/10.1007/978-3-030-21642-9_25
