Combining LSTM and DenseNet for Automatic Annotation and Classification of Chest X-Ray Images



Abstract

The chest X-ray is a simple and economical medical aid for auxiliary diagnosis and has therefore become a routine item in residents' physical examinations. Based on 40,167 chest radiographs and their corresponding reports, we explore the abnormality classification problem for chest X-rays using deep learning techniques. First, since radiology reports generally follow templates organized by the abnormal anatomical region, we propose an annotation method based on the abnormal parts visible in the images. Second, building on a small number of reports manually annotated by professional radiologists, we employ a long short-term memory (LSTM) model to automatically annotate the remaining unlabeled data. The annotation results reach a precision of 0.88, a recall of 0.85, and an F1-score of 0.86. Finally, we classify the abnormalities in the chest X-rays by training convolutional neural networks, achieving an average AUC of 0.835.
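As a quick sanity check, the reported F1-score follows from the reported precision and recall; a minimal sketch, assuming the standard harmonic-mean definition of F1:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (standard F1 definition)."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract for the LSTM annotation step.
f1 = f1_score(0.88, 0.85)
print(round(f1, 2))  # → 0.86, matching the reported F1-score
```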

Cite

CITATION STYLE

APA

Yan, F., Huang, X., Yao, Y., Lu, M., & Li, M. (2019). Combining LSTM and DenseNet for Automatic Annotation and Classification of Chest X-Ray Images. IEEE Access, 7, 74181–74189. https://doi.org/10.1109/ACCESS.2019.2920397
