Deep Learning and Explainable AI in Healthcare Using EHR

Abstract

In recent years, Artificial Intelligence (AI) has proved to be of great assistance in the medical field, and rapid advances have made it possible to predict the risk of many different diseases. A patient's Electronic Health Record (EHR) contains many kinds of medical data recorded at each visit. Predictive models such as random forests and boosted trees provide high accuracy but no end-to-end interpretability, while models such as Naive Bayes, logistic regression, and single decision trees are intelligible but less accurate; the interpretable models also fail to capture the temporal relationships among the attributes present in EHR data, and accuracy is compromised as a result. Interpretability is essential in critical healthcare applications: it provides medical personnel with explanations that build trust in machine learning systems. This chapter presents the design and implementation of an explainable deep learning system for healthcare using EHR. It discusses the use of an attention mechanism and a Recurrent Neural Network (RNN) on EHR data to predict a patient's heart failure risk and to provide insight into the key diagnoses that led to the prediction. The patient's medical history is given as a sequential input to the RNN, which predicts the heart failure risk and supplies an explanation along with it; this constitutes an ante-hoc explainability model. A two-level neural attention model is trained to detect the visits in a patient's history that are influential and significant for understanding the reasons behind any prediction made on that patient's medical data, and processing the most recent visit first proves beneficial. When a prediction is made, the visit-level contribution is prioritized, i.e., the model indicates which visit, each consisting of multiple diagnosis codes, contributes most to the final prediction. This model can help medical practitioners predict a patient's heart failure risk from the diseases recorded in the EHR. The model is further analyzed with local interpretable model-agnostic explanations (LIME), which identify the features that contribute positively and negatively to heart failure risk.
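To make the architecture described above more concrete, the following is a minimal sketch of a two-level attention RNN over visit sequences, in the spirit of the model the abstract describes. The class name, the use of GRUs, the embedding and hidden dimensions, and the multi-hot visit encoding are all assumptions for illustration and are not taken from the chapter itself; the authors' actual implementation may differ.

```python
import torch
import torch.nn as nn

class TwoLevelAttentionRNN(nn.Module):
    """Illustrative two-level attention RNN for visit-level explanations.

    Each patient is a sequence of visits; each visit is a multi-hot vector
    over n_codes diagnosis codes. Visits are processed in reverse
    chronological order, so the most recent visit is attended to first.
    (Hypothetical sketch; not the chapter's reference implementation.)
    """

    def __init__(self, n_codes, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Linear(n_codes, emb_dim, bias=False)
        self.rnn_alpha = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # visit-level attention RNN
        self.rnn_beta = nn.GRU(emb_dim, hidden_dim, batch_first=True)   # code-level attention RNN
        self.alpha_fc = nn.Linear(hidden_dim, 1)
        self.beta_fc = nn.Linear(hidden_dim, emb_dim)
        self.out = nn.Linear(emb_dim, 1)  # heart-failure risk logit

    def forward(self, visits):
        # visits: (batch, n_visits, n_codes), ordered oldest -> newest
        v = self.embed(visits)                 # visit embeddings
        v_rev = torch.flip(v, dims=[1])        # reverse time: last visit first
        g, _ = self.rnn_alpha(v_rev)
        h, _ = self.rnn_beta(v_rev)
        alpha = torch.softmax(self.alpha_fc(g), dim=1)  # one scalar weight per visit
        beta = torch.tanh(self.beta_fc(h))              # per-dimension (code-level) weights
        context = (alpha * beta * v_rev).sum(dim=1)     # attention-weighted patient vector
        risk = torch.sigmoid(self.out(context)).squeeze(-1)
        # alpha, flipped back to chronological order, gives each visit's
        # contribution to the final prediction.
        return risk, torch.flip(alpha.squeeze(-1), dims=[1])

# Usage example with toy data: 2 patients, 5 visits each, 200 possible codes
model = TwoLevelAttentionRNN(n_codes=200)
x = torch.randint(0, 2, (2, 5, 200)).float()
risk, visit_weights = model(x)
```

The returned `visit_weights` play the role of the visit-level contributions discussed in the abstract; a model-agnostic explainer such as LIME can then be applied to the trained predictor to attribute positive and negative contributions to individual features.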

Citation (APA)

Khedkar, S., Gandhi, P., Shinde, G., & Subramanian, V. (2020). Deep Learning and Explainable AI in Healthcare Using EHR. In Studies in Big Data (Vol. 68, pp. 129–148). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-33966-1_7