Interpretable Pneumonia Detection by Combining Deep Learning and Explainable Models with Multisource Data

Abstract

With the rapid development of AI techniques, computer-aided diagnosis has attracted much attention and has been successfully deployed in many health care and medical diagnosis applications. On some specific tasks, learning-based systems can match or even outperform human experts. This impressive performance owes to the excellent expressiveness and scalability of neural networks, although the models' reasoning usually cannot be represented explicitly. For computer-aided diagnosis, however, interpretability is as important as diagnostic precision. To fill this gap, we propose an intuitive approach for detecting pneumonia interpretably. First, we build a large dataset of community-acquired pneumonia (as distinguished from nosocomial pneumonia) comprising 35,389 cases drawn from actual medical records. Second, we train a prediction model on the chest X-ray images in our dataset that detects pneumonia precisely. Third, we propose an intuitive approach that combines neural networks with an explainable model such as a Bayesian network. Experimental results show that our proposal further improves performance by using multi-source data and provides intuitive explanations for the diagnosis results.
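The abstract's core idea is fusing a neural network's image-based prediction with an explainable probabilistic model over other data sources. The sketch below illustrates one minimal way such a fusion could work: the network's image score is converted to odds and combined, via Bayes' rule, with likelihood ratios from assumed clinical findings. All variable names, findings, and probabilities here are illustrative assumptions, not the authors' actual model or values.

```python
# Hypothetical sketch: fuse a CNN image score with a small explainable
# probabilistic model over clinical findings (naive-Bayes-style fusion).
# The conditional probability table (CPT) values are made up for illustration.

# Assumed P(finding | pneumonia) and P(finding | healthy).
CPT = {
    "fever": {"pneumonia": 0.80, "healthy": 0.15},
    "cough": {"pneumonia": 0.85, "healthy": 0.25},
}

def posterior(image_score: float, findings: dict) -> float:
    """Combine the network's P(pneumonia | image) with clinical findings.

    image_score: the CNN's predicted probability of pneumonia from the X-ray.
    findings: mapping like {"fever": True, "cough": False} for observed signs.
    Returns the fused probability of pneumonia.
    """
    # Convert the image model's probability to odds.
    odds = image_score / (1.0 - image_score)
    # Multiply in the likelihood ratio of each observed finding, assuming
    # the findings are conditionally independent given the disease state.
    for name, present in findings.items():
        p_pneu = CPT[name]["pneumonia"] if present else 1 - CPT[name]["pneumonia"]
        p_heal = CPT[name]["healthy"] if present else 1 - CPT[name]["healthy"]
        odds *= p_pneu / p_heal
    # Convert odds back to a probability.
    return odds / (1.0 + odds)
```

Because each finding contributes an explicit likelihood ratio, the fused prediction can be explained factor by factor, which is the kind of transparency the abstract attributes to the Bayesian-network component.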

Citation (APA)

Ren, H., Wong, A. B., Lian, W., Cheng, W., Zhang, Y., He, J., … Zhang, H. (2021). Interpretable Pneumonia Detection by Combining Deep Learning and Explainable Models with Multisource Data. IEEE Access, 9, 95872–95883. https://doi.org/10.1109/ACCESS.2021.3090215
