EXAM: An Explainable Attention-based Model for COVID-19 Automatic Diagnosis


Abstract

The ongoing coronavirus disease 2019 (COVID-19) pandemic is still spreading rapidly and has caused over 7,000,000 infections and 400,000 deaths around the world. To build a fast and reliable COVID-19 diagnosis system, researchers have turned to machine learning to establish computer-aided diagnosis systems based on radiological imaging techniques such as X-ray imaging and computed tomography. Although artificial-intelligence-based architectures have achieved substantial improvements in performance, most of these models remain black boxes to researchers. In this paper, we propose an Explainable Attention-based Model (EXAM) for COVID-19 automatic diagnosis with convincing visual interpretation. We cast the diagnosis process on radiological images as an image classification problem differentiating COVID-19, normal, and community-acquired pneumonia (CAP) cases. By combining channel-wise and spatial-wise attention mechanisms, the proposed approach can effectively extract key features and suppress irrelevant information. Experimental results and visualizations indicate that EXAM outperforms recent state-of-the-art models and demonstrate its interpretability.
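To illustrate what "combining channel-wise and spatial-wise attention" can look like in practice, here is a minimal PyTorch sketch of a CBAM-style block. It is an assumption-laden illustration, not the authors' released implementation; the reduction ratio, kernel size, and module names are chosen for clarity only.

```python
# Illustrative sketch of combined channel-wise and spatial-wise attention
# (CBAM-style). NOT the authors' EXAM code; sizes and names are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights feature-map channels using pooled global descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                          # suppress irrelevant channels


class SpatialAttention(nn.Module):
    """Highlights informative spatial locations; the resulting map can be
    visualized to show which image regions drove the prediction."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class AttentionBlock(nn.Module):
    """Channel attention followed by spatial attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # Toy usage: a batch of 4 feature maps with 64 channels.
    feats = torch.randn(4, 64, 32, 32)
    out = AttentionBlock(64)(feats)
    print(out.shape)  # torch.Size([4, 64, 32, 32])
```

Such a block is typically inserted after convolutional stages of a CNN backbone, and the spatial attention map offers the kind of visual interpretation the abstract refers to.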

Citation (APA)

Shi, W., Tong, L., Zhuang, Y., Zhu, Y., & Wang, M. D. (2020). EXAM: An Explainable Attention-based Model for COVID-19 Automatic Diagnosis. In Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB 2020. Association for Computing Machinery, Inc. https://doi.org/10.1145/3388440.3412455
