Image segmentation is critical to many medical applications. While deep learning (DL) methods continue to improve performance on many medical image segmentation tasks, data annotation remains a major bottleneck for DL-based segmentation because (1) DL models tend to require large amounts of labeled data for training, and (2) producing voxel-wise labels for 3D medical images is highly time-consuming and labor-intensive. Substantially reducing annotation effort while attaining good performance with DL segmentation models remains a major challenge. In our preliminary experiments, we observe that models trained on partially labeled datasets indeed exhibit a large performance gap with respect to models trained on fully annotated datasets. In this paper, we propose a new DL framework for reducing annotation effort and bridging the gap between full annotation and sparse annotation in 3D medical image segmentation. We achieve this by (i) selecting representative slices in 3D images that minimize data redundancy and save annotation effort, and (ii) self-training with pseudo-labels automatically generated from base models trained using the selected annotated slices. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our framework yields segmentation results competitive with state-of-the-art DL methods while using less than ∼20% of the annotated data.
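The abstract's two components, representative slice selection and self-training with pseudo-labels, can be made concrete with a short sketch. The Python code below is a hypothetical illustration only: the per-slice feature vectors, the facility-location-style greedy selection objective, and the 0.9 confidence threshold are assumptions introduced here for clarity, not details taken from the paper.

```python
# A minimal, hypothetical sketch of the two steps named in the abstract.
# The feature extractor, the greedy coverage objective, and the confidence
# threshold are illustrative assumptions, not the authors' actual design.
import numpy as np


def select_representative_slices(features: np.ndarray, budget: int) -> list:
    """Greedily pick `budget` slice indices to annotate.

    features: (num_slices, feat_dim) per-slice feature vectors (e.g., from
    a pretrained encoder; how features are computed is an assumption here).
    Each greedy step adds the slice that most improves how well the chosen
    set "covers" all slices under cosine similarity, which discourages
    picking redundant, near-duplicate slices.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    feats = features / np.clip(norms, 1e-12, None)
    sim = feats @ feats.T                    # pairwise cosine similarities
    coverage = np.zeros(len(feats))          # best similarity to chosen set
    selected = []
    for _ in range(budget):
        # Total coverage if candidate i were added, computed per candidate.
        gains = np.maximum(sim, coverage[None, :]).sum(axis=1) - coverage.sum()
        if selected:
            gains[selected] = -np.inf        # never pick a slice twice
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, sim[best])
    return selected


def harvest_pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """Turn base-model predictions on unlabeled slices into pseudo-labels.

    probs: (num_slices, num_classes, H, W) softmax outputs of a base model
    trained on the selected annotated slices. Only pixels whose top-class
    probability clears `threshold` (an assumed value) are kept for the
    self-training round; the rest would be masked out of the loss.
    """
    labels = probs.argmax(axis=1)            # (num_slices, H, W) hard labels
    confident = probs.max(axis=1) >= threshold
    return labels, confident


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(64, 128))       # 64 slices, 128-d features
    to_annotate = select_representative_slices(feats, budget=12)
    print("slices to annotate:", sorted(to_annotate))

    fake_probs = rng.dirichlet(np.ones(3), size=(64, 8, 8)).transpose(0, 3, 1, 2)
    pseudo, mask = harvest_pseudo_labels(fake_probs)
    print("confident pixels:", int(mask.sum()), "of", mask.size)
```

In a framework of this kind, the two functions would alternate: annotate the selected slices, train a base model on them, harvest confident pseudo-labels on the remaining slices, and retrain on the union of true and pseudo labels. Whether the paper uses exactly this loop structure is not stated in the abstract.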