Segmentation and recognition of breast ultrasound images based on an expanded U-Net

Abstract

This paper establishes a fully automatic, real-time image segmentation and recognition system for breast ultrasound intervention robots. It adopts the basic architecture of the U-shaped convolutional network (U-Net), analyses the practical application scenarios of semantic segmentation of breast ultrasound images, and adds dropout layers to the U-Net architecture to reduce redundancy in texture details and prevent overfitting. The main innovation of this paper is an expanded training approach that yields an expanded U-Net whose output map retains the texture details and edge features of breast tumours. Training the U-Net with grey-level probability labels is faster than training with ordinary labels. With the expanded training approach, the average Dice coefficient is 90.5% (±0.02 standard deviation) and the average IOU coefficient is 82.7% (±0.02). The Dice coefficient of the expanded U-Net is 7.6 percentage points higher than that of a general U-Net, and its IOU coefficient is 11 percentage points higher. The expanded U-Net can extract the context of breast ultrasound images while retaining the texture details and edge features of tumours, and it quickly and automatically achieves precise segmentation and multi-class recognition of breast ultrasound images.
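The Dice and IOU coefficients reported in the abstract are standard overlap measures between a predicted segmentation mask and a ground-truth mask. A minimal sketch of both metrics for binary masks (an illustrative implementation, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * intersection / total if total else 1.0

def iou_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """IOU (Jaccard) = |A ∩ B| / |A ∪ B| for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0
```

For example, a prediction that covers the true tumour region plus one extra pixel scores a higher Dice than IOU, since Dice weights the intersection twice; this is why the two gains reported above (7.6 vs. 11 percentage points) differ in scale.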
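The abstract states that grey-level probability labels train faster than ordinary hard labels, but does not describe how such labels are built. One common way to obtain soft labels from a binary mask is local averaging, which replaces the hard 0/1 edge with a gradual probability ramp. The sketch below uses a simple box blur; this particular construction is an assumption for illustration, not the paper's method:

```python
import numpy as np

def soft_label(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Turn a binary mask into a grey-level probability map in [0, 1]
    by averaging each pixel's k-by-k neighbourhood.
    NOTE: illustrative assumption -- the paper's exact label
    construction is not given in the abstract."""
    mask = mask.astype(float)
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")  # replicate border pixels
    out = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out
```

Pixels deep inside (or far outside) the tumour keep probability 1 (or 0), while pixels near the boundary take intermediate grey levels, giving the loss a smoother gradient at edges.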

Citation (APA)

Guo, Y., Duan, X., Wang, C., & Guo, H. (2021). Segmentation and recognition of breast ultrasound images based on an expanded U-Net. PLoS ONE, 16(6), e0253202. https://doi.org/10.1371/journal.pone.0253202
