Deep Contextual Clinical Prediction with Reverse Distillation

Abstract

Healthcare providers are increasingly using machine learning to predict patient outcomes and make meaningful interventions. However, despite innovations in this area, deep learning models often struggle to match the performance of shallow linear models in predicting these outcomes, making it difficult to leverage such techniques in practice. In this work, motivated by the task of clinical prediction from insurance claims, we present a new technique called reverse distillation, which pretrains deep models using high-performing linear models for initialization. We make use of the longitudinal structure of insurance claims datasets to develop Self-Attention with Reverse Distillation, or SARD, an architecture that combines contextual embeddings, temporal embeddings, and self-attention mechanisms and, most critically, is trained via reverse distillation. SARD outperforms state-of-the-art methods on multiple clinical prediction outcomes, with ablation studies revealing that reverse distillation is a primary driver of these improvements. Code is available at https://github.com/clinicalml/omop-learn.
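The core idea in the abstract — pretrain a deep "student" model to reproduce the predictions of a high-performing linear "teacher," then fine-tune it on the true labels — can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not the authors' implementation (see the linked repository for that): the synthetic data, the small MLP student, and the choice of binary cross-entropy on the teacher's soft probabilities as the distillation loss are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: a binary outcome driven linearly by 20 features
# (a stand-in for featurized insurance-claims codes).
X = rng.normal(size=(256, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

# Step 1: fit the high-performing linear "teacher" (logistic regression,
# full-batch gradient descent).
w = np.zeros(20)
for _ in range(1000):
    w -= 0.5 * X.T @ (sigmoid(X @ w) - y) / len(y)
teacher_probs = sigmoid(X @ w)

# Step 2: reverse distillation -- pretrain a small MLP "student" so that
# its predictions match the linear teacher's probabilities.
W1 = rng.normal(scale=0.1, size=(20, 32))
W2 = rng.normal(scale=0.1, size=32)

def predict(X):
    h = np.maximum(X @ W1, 0.0)              # ReLU hidden layer
    return h, sigmoid(h @ W2)

def gd_step(targets, lr=0.5):
    """One full-batch gradient step of BCE loss toward `targets`."""
    global W1, W2
    h, p = predict(X)
    d_logit = (p - targets) / len(targets)   # dBCE/dlogit
    gW2 = h.T @ d_logit
    dh = np.outer(d_logit, W2) * (h > 0)     # backprop through ReLU
    W2 -= lr * gW2
    W1 -= lr * (X.T @ dh)

for _ in range(3000):
    gd_step(teacher_probs)    # pretrain toward the teacher's soft targets

# Step 3: fine-tune the pretrained student on the true labels.
for _ in range(2000):
    gd_step(y)

_, p = predict(X)
acc = float(((p > 0.5) == (y > 0.5)).mean())
```

Because the student starts from the teacher's decision function rather than a random initialization, fine-tuning only needs to learn corrections on top of an already strong linear baseline; the paper's ablations credit this pretraining step for much of SARD's improvement.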

Citation (APA)

Kodialam, R., Boiarsky, R., Lim, J., Sai, A., Dixit, N., & Sontag, D. (2021). Deep Contextual Clinical Prediction with Reverse Distillation. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 1, pp. 249–258). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i1.16099
