Improved 3D U-Net for COVID-19 Chest CT Image Segmentation

Citations: 27
Mendeley readers: 26

This article is free to access.

Abstract

Coronavirus disease 2019 (COVID-19) has spread rapidly worldwide. Rapid and accurate automatic segmentation of COVID-19 infected areas from chest computed tomography (CT) scans is critical for assessing disease progression. However, infected areas have irregular sizes and shapes, and their image features vary considerably. We propose a convolutional neural network, named 3D CU-Net, that automatically identifies COVID-19 infected areas in 3D chest CT images by extracting rich features and fusing multiscale global information. 3D CU-Net is based on the 3D U-Net architecture. In the encoder, we introduce an attention mechanism that performs local cross-channel information interaction to enhance feature representations at different levels. At the end of the encoder, we design a pyramid fusion module with dilated convolutions to fuse multiscale context information from high-level features. The Tversky loss is used to address the irregular sizes and uneven distribution of lesions. Experimental results show that 3D CU-Net achieves excellent segmentation performance, with Dice similarity coefficients of 96.3% and 77.8% for the lung and COVID-19 infected areas, respectively. 3D CU-Net has high potential for use in diagnosing COVID-19.
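The abstract attributes the handling of irregular lesion sizes and class imbalance to the Tversky loss. Below is a minimal sketch of the standard Tversky loss in PyTorch; the tensor shapes, the smoothing term eps, and the alpha/beta values are illustrative assumptions, not values reported in the paper.

import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    # pred: predicted probabilities in [0, 1], shape (N, 1, D, H, W)
    # target: binary ground-truth mask, same shape
    # alpha weights false positives, beta weights false negatives;
    # alpha = beta = 0.5 recovers the Dice loss.
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)
    fp = (pred * (1 - target)).sum(dim=1)
    fn = ((1 - pred) * target).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky).mean()

Setting beta larger than alpha penalises false negatives more heavily, which is the usual motivation for preferring the Tversky loss over the Dice loss when lesions are small and under-represented.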

Citation (APA)
Zheng, R., Zheng, Y., & Dong-Ye, C. (2021). Improved 3D U-Net for COVID-19 Chest CT Image Segmentation. Scientific Programming, 2021. https://doi.org/10.1155/2021/9999368
