Labeling Chest X-Ray Reports Using Deep Learning


Abstract

One of the primary challenges in developing Chest X-Ray (CXR) interpretation models has been the lack of large datasets with multilabel image annotations extracted from radiology reports. This paper proposes a CXR labeler, abbreviated CXRlabeler, that simultaneously extracts fourteen observations from free-text radiology reports and labels each as positive or negative. It fine-tunes a pre-trained language model, AWD-LSTM, on the corpus of CXR radiology impressions and then uses it as the base of a multilabel classifier. Experiments demonstrate that language-model fine-tuning increases the classifier's F1 score by 12.53%. Overall, CXRlabeler achieves a 96.17% F1 score on the MIMIC-CXR dataset. To further test its generalization, CXRlabeler is also evaluated on the PadChest dataset, showing that the approach remains helpful in a different language environment. The model (available at https://github.com/MaramMonshi/CXRlabeler) can assist researchers in labeling CXR datasets with fourteen observations.
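The F1 scores quoted above are for a multilabel task, where each report receives fourteen independent positive/negative labels. As a minimal self-contained sketch (not the authors' code; the toy label vectors below are invented), a micro-averaged F1 over fourteen binary observations could be computed like this:

```python
# Toy illustration of micro-averaged F1 over 14 binary observations,
# the kind of aggregate metric used to evaluate multilabel CXR labelers.
def micro_f1(y_true, y_pred):
    """y_true, y_pred: lists of 14-element 0/1 vectors, one per report."""
    tp = fp = fn = 0
    for row_t, row_p in zip(y_true, y_pred):
        for t, p in zip(row_t, row_p):
            tp += t and p            # predicted positive, truly positive
            fp += (not t) and p      # predicted positive, truly negative
            fn += t and (not p)      # missed a true positive
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two invented reports, each with 14 observations (1 = positive finding)
y_true = [[1,0,1,0,0,0,0,0,0,0,0,0,0,1], [0,1,0,0,0,0,0,0,0,0,0,0,0,0]]
y_pred = [[1,0,1,0,0,0,0,0,0,0,0,0,0,0], [0,1,0,0,0,0,0,0,0,0,0,0,0,0]]
print(round(micro_f1(y_true, y_pred), 4))  # → 0.8571
```

Micro-averaging pools true/false positives across all fourteen observations before computing F1, so frequent findings weigh more than rare ones.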

APA

Monshi, M. M. A., Poon, J., Chung, V., & Monshi, F. M. (2021). Labeling Chest X-Ray Reports Using Deep Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12893 LNCS, pp. 684–694). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-86365-4_55
