Active learning technique for multimodal brain tumor segmentation using limited labeled images

Abstract

Image segmentation is an essential step in biomedical image analysis. In recent years, deep learning models have achieved significant success in segmentation. However, deep learning requires large amounts of annotated data to train these models, which can be challenging to obtain in the biomedical imaging domain. In this paper, we aim to accomplish biomedical image segmentation with limited labeled data using active learning. We present a deep active learning framework that selects additional data points to be annotated by combining U-Net with an efficient and effective query strategy that captures the most uncertain and representative points. The algorithm decouples the representativeness criterion by first finding the core points in the unlabeled pool and then selecting, from the reduced pool, the most uncertain points that differ from the labeled pool. In our experiments, active learning required only 13% of the dataset to outperform the model trained on the entire 2018 MICCAI Brain Tumor Segmentation (BraTS) dataset. Thus, active learning reduced the amount of labeled data required for image segmentation without a significant loss in accuracy.
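The two-stage query strategy described above can be sketched as follows. This is a minimal illustration, assuming k-means clustering for the representativeness (core point) stage and voxel- or sample-level predictive entropy for the uncertainty stage; the paper's exact selection criteria may differ.

import numpy as np
from sklearn.cluster import KMeans

def entropy(probs, eps=1e-12):
    # Predictive entropy per sample from class probabilities (n_samples, n_classes).
    return -np.sum(probs * np.log(probs + eps), axis=1)

def query_batch(features, probs, n_core, n_query):
    # features : (n_unlabeled, d) embeddings of the unlabeled pool
    #            (e.g. U-Net bottleneck features) -- illustrative assumption.
    # probs    : (n_unlabeled, n_classes) model class probabilities for the same samples.
    # n_core   : size of the representative reduced pool.
    # n_query  : number of samples to send for annotation.

    # Stage 1 (representativeness): cluster the unlabeled pool and keep, for each
    # cluster, the sample closest to its centre as a core point.
    km = KMeans(n_clusters=n_core, n_init=10, random_state=0).fit(features)
    dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
    core_idx = np.array([
        np.where(km.labels_ == c)[0][np.argmin(dists[km.labels_ == c])]
        for c in range(n_core)
    ])

    # Stage 2 (uncertainty): rank the core points by predictive entropy
    # and take the top n_query for annotation.
    unc = entropy(probs[core_idx])
    return core_idx[np.argsort(-unc)[:n_query]]

# Toy usage with random data standing in for real U-Net features and probabilities.
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 32))
p = rng.dirichlet(np.ones(4), size=500)
picked = query_batch(feats, p, n_core=50, n_query=10)
print(picked)

Decoupling the two stages in this way keeps the uncertainty ranking cheap, since it is computed only on the reduced core-point pool rather than the full unlabeled pool.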

Citation (APA)

Sharma, D., Shanis, Z., Reddy, C. K., Gerber, S., & Enquobahrie, A. (2019). Active learning technique for multimodal brain tumor segmentation using limited labeled images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11795 LNCS, pp. 148–156). Springer. https://doi.org/10.1007/978-3-030-33391-1_17
