Deep learning with test-time augmentation for radial endobronchial ultrasound image differentiation: a multicentre verification study


Abstract

Purpose: Despite the importance of radial endobronchial ultrasound (rEBUS) in transbronchial biopsy, researchers have yet to apply artificial intelligence to the analysis of rEBUS images.

Materials and methods: This study developed a convolutional neural network (CNN) to differentiate between malignant and benign tumours in rEBUS images. rEBUS images were retrospectively collected from medical centres in Taiwan: 769 images from National Taiwan University Hospital Hsin-Chu Branch, Hsinchu Hospital, for model training (615 images) and internal validation (154 images), together with 300 images from National Taiwan University Hospital (NTUH-TPE) and 92 images from National Taiwan University Hospital Hsin-Chu Branch, Biomedical Park Hospital (NTUH-BIO), for external validation. The model was further assessed using image augmentation in the training phase and test-time augmentation (TTA).

Results: On the internal validation dataset, the model achieved an area under the curve (AUC) of 0.88 (95% CI 0.83 to 0.92), sensitivity of 0.80 (95% CI 0.73 to 0.88) and specificity of 0.75 (95% CI 0.66 to 0.83). On the NTUH-TPE external validation dataset, it achieved an AUC of 0.76 (95% CI 0.71 to 0.80), sensitivity of 0.58 (95% CI 0.50 to 0.65) and specificity of 0.92 (95% CI 0.88 to 0.97). On the NTUH-BIO external validation dataset, it achieved an AUC of 0.72 (95% CI 0.64 to 0.82), sensitivity of 0.71 (95% CI 0.55 to 0.86) and specificity of 0.76 (95% CI 0.64 to 0.87). After fine-tuning, the AUCs for the external validation cohorts were 0.78 (NTUH-TPE) and 0.82 (NTUH-BIO). The findings also demonstrated the feasibility of the model in differentiating between lung cancer subtypes, as indicated by the following AUCs: adenocarcinoma 0.70 (95% CI 0.64 to 0.76), squamous cell carcinoma 0.64 (95% CI 0.54 to 0.74) and small cell lung cancer 0.52 (95% CI 0.32 to 0.72).
Conclusions: Our results demonstrate the feasibility of the proposed CNN-based algorithm in differentiating between malignant and benign lesions in rEBUS images.
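The test-time augmentation described in the abstract can be illustrated with a minimal sketch: the model scores several augmented views of one image and the scores are averaged. This is not the authors' implementation — the augmentation set (flips and 90° rotations), the averaging rule and the `toy_model` stand-in are all assumptions for illustration.

```python
import numpy as np

def tta_predict(model, image):
    """Average a model's malignancy scores over augmented copies
    of one rEBUS image (identity, flips, 90-degree rotations)."""
    views = [
        image,
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # 90-degree rotation
        np.rot90(image, k=3),  # 270-degree rotation
    ]
    scores = [model(v) for v in views]
    return float(np.mean(scores))

# Toy "model": mean pixel intensity as a stand-in for a CNN's probability.
toy_model = lambda img: img.mean()

img = np.arange(16, dtype=float).reshape(4, 4) / 16.0
print(tta_predict(toy_model, img))  # → 0.46875
```

Because each view is scored independently, TTA adds no training cost; it only multiplies inference time by the number of views.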

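The abstract reports each metric with a 95% CI. One common way to obtain such figures — a self-contained sketch, not necessarily the authors' exact procedure — is a rank-based (Mann-Whitney) AUC combined with a percentile bootstrap; the bootstrap settings below are assumptions:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Count positive-negative pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    n, stats = len(labels), []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if labels[idx].all() or (~labels[idx].astype(bool)).all():
            continue  # a resample must contain both classes
        stats.append(auc(labels[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Toy example: scores that separate the classes well but not perfectly.
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
s = np.array([0.1, 0.3, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9])
print(auc(y, s))           # → 0.9375
print(bootstrap_ci(y, s))  # 95% CI, varies with the resamples
```

With only eight samples the bootstrap interval is very wide; the CIs in the paper are tighter because the validation sets contain hundreds of images.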

Citation (APA)

Yu, K. L., Tseng, Y. S., Yang, H. C., Liu, C. J., Kuo, P. C., Lee, M. R., … Yu, C. J. (2023). Deep learning with test-time augmentation for radial endobronchial ultrasound image differentiation: a multicentre verification study. BMJ Open Respiratory Research, 10(1). https://doi.org/10.1136/bmjresp-2022-001602


