Sparse, Interpretable and Transparent Predictive Model Identification for Healthcare Data Analysis


Abstract

Data-driven modelling approaches play an indispensable role in analyzing and understanding complex processes. This study proposes a type of sparse, interpretable and transparent (SIT) machine learning model, which can be used to characterize the dependence of a response variable on a set of potential explanatory variables. An ideal candidate for such an SIT representation is the well-known NARMAX (nonlinear autoregressive moving average with exogenous inputs) model, which can be established from measured input and output data of the system of interest; the final refined model is usually simple, parsimonious and easy to interpret. The performance of the proposed SIT models is evaluated on two real healthcare datasets.
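To make the idea concrete, the sketch below illustrates the kind of sparse term selection that underlies NARMAX model identification. The paper's actual method builds on FROLS-type algorithms with error reduction ratios; what follows is a simplified, closely related greedy forward selection (orthogonal-matching-pursuit style) on a hypothetical toy system, not the author's implementation. All names, the dictionary of candidate terms, and the toy data are illustrative assumptions.

```python
import numpy as np

def select_terms(D, y, n_terms):
    """Greedily pick n_terms columns of the candidate-term matrix D
    (orthogonal-matching-pursuit style), refitting coefficients by
    least squares after each addition. Simplified stand-in for the
    FROLS-type selection used in NARMAX identification."""
    selected, residual = [], y.copy()
    for _ in range(n_terms):
        # Score each unused candidate by its squared normalized
        # correlation with the current residual (an ERR-like criterion).
        scores = [
            ((D[:, j] @ residual) ** 2 / (D[:, j] @ D[:, j]))
            if j not in selected else -np.inf
            for j in range(D.shape[1])
        ]
        selected.append(int(np.argmax(scores)))
        theta, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ theta
    return selected, theta

# Hypothetical noiseless toy system: y[k] = 0.5*y[k-1] + u[k-1]**2
N = 200
u = np.sin(0.3 * np.arange(N))
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.5 * y[k - 1] + u[k - 1] ** 2

# Candidate term dictionary at times k = 1..N-1: lagged output, lagged
# input, and two simple nonlinear combinations.
y_lag, u_lag = y[:-1], u[:-1]
D = np.column_stack([y_lag, u_lag, u_lag ** 2, y_lag * u_lag])
target = y[1:]

selected, theta = select_terms(D, target, n_terms=2)
```

On this toy example the two true terms (`y[k-1]` and `u[k-1]**2`) are recovered from the four candidates, which is the sense in which such models are sparse and easy to interpret: the final model lists only a handful of named terms with their coefficients.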

Citation (APA)

Wei, H. L. (2019). Sparse, Interpretable and Transparent Predictive Model Identification for Healthcare Data Analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11506 LNCS, pp. 103–114). Springer Verlag. https://doi.org/10.1007/978-3-030-20521-8_9
