Comparison of Chest Radiograph Captions Based on Natural Language Processing vs Completed by Radiologists

Abstract

Importance: Artificial intelligence (AI) can interpret abnormal signs in chest radiography (CXR) and generate captions, but a prospective study is needed to examine its practical value.

Objective: To prospectively compare natural language processing (NLP)-generated CXR captions with the diagnostic findings of radiologists.

Design, Setting, and Participants: A multicenter diagnostic study was conducted. The training data set included CXR images and reports retrospectively collected from February 1, 2014, to February 28, 2018. The retrospective test data set included consecutive images and reports from April 1 to July 31, 2019. The prospective test data set included consecutive images and reports from May 1 to September 30, 2021.

Exposures: A bidirectional encoder representations from transformers (BERT) model was used to extract language entities and relationships from unstructured CXR reports to establish 23 labels of abnormal signs to train convolutional neural networks. The participants in the prospective test group were randomly assigned to 1 of 3 caption generation models: a normal template, NLP-generated captions, and rule-based captions based on convolutional neural networks. For each case, a resident drafted the report based on the randomly assigned captions, and an experienced radiologist finalized the report blinded to the original captions. A total of 21 residents and 19 radiologists were involved.

Main Outcomes and Measures: Time to write reports based on the different caption generation models.

Results: The training data set consisted of 74 082 cases (39 254 [53.0%] women; mean [SD] age, 50.0 [17.1] years). In the retrospective (n = 8126; 4345 [53.5%] women; mean [SD] age, 47.9 [15.9] years) and prospective (n = 5091; 2416 [47.5%] women; mean [SD] age, 45.1 [15.6] years) test data sets, the mean (SD) area under the curve of abnormal signs was 0.87 (0.11) in the retrospective data set and 0.84 (0.09) in the prospective data set. The residents' mean (SD) reporting time using the NLP-generated model was 283 (37) seconds, significantly shorter than with the normal template (347 [58] seconds; P
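The per-label discrimination metric reported above (mean [SD] area under the curve across the 23 abnormal-sign labels) can be illustrated with a minimal sketch. This is not the study's code: the label names and scores below are hypothetical, and the AUC is computed via the Mann-Whitney U statistic rather than any specific library the authors may have used.

```python
import statistics

def auc(y_true, y_score):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count as half a win)."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-label ground truth and model scores for a handful of
# the 23 abnormal-sign labels; the real evaluation would use the full
# retrospective and prospective test sets described in the abstract.
per_label = {
    "pleural_effusion": ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),
    "cardiomegaly":     ([0, 1, 0, 1], [0.2, 0.9, 0.3, 0.7]),
    "nodule":           ([1, 0, 0, 1], [0.6, 0.2, 0.5, 0.9]),
}

aucs = [auc(truth, scores) for truth, scores in per_label.values()]
print(f"mean (SD) AUC: {statistics.mean(aucs):.2f} ({statistics.stdev(aucs):.2f})")
```

Averaging per-label AUCs in this way weights every abnormal sign equally regardless of its prevalence, which matches how the abstract summarizes performance across the 23 labels.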

Citation (APA):

Zhang, Y., Liu, M., Zhang, L., Wang, L., Zhao, K., Hu, S., … Xie, X. (2023). Comparison of Chest Radiograph Captions Based on Natural Language Processing vs Completed by Radiologists. JAMA Network Open, 6(2). https://doi.org/10.1001/jamanetworkopen.2022.55113
