Much work aims to explain a model's prediction on a static input. We consider explanations in a temporal setting where a stateful dynamical model produces a sequence of risk estimates given an input at each time step. When the estimated risk increases, the goal of the explanation is to attribute the increase to a few relevant inputs from the past. While our formal setup and techniques are general, we carry out an in-depth case study in a clinical setting. The goal here is to alert a clinician when a patient's risk of deterioration rises. The clinician then has to decide whether to intervene and adjust the treatment. Given a potentially long sequence of new events since she last saw the patient, a concise explanation helps her to quickly triage the alert. We develop methods to lift static attribution techniques to the dynamical setting, where we identify and address challenges specific to dynamics. We then experimentally assess the utility of different explanations of clinical alerts through expert evaluation.
CITATION STYLE
Hardt, M., Rajkomar, A., Flores, G., Dai, A., Howell, M., Corrado, G., … Hardt, M. (2020). Explaining an increase in predicted risk for clinical alerts. In ACM CHIL 2020 - Proceedings of the 2020 ACM Conference on Health, Inference, and Learning (pp. 80–89). Association for Computing Machinery, Inc. https://doi.org/10.1145/3368555.3384460