Deep learning based multimodal brain tumor diagnosis

Abstract

Brain tumor segmentation plays an important role in disease diagnosis. In this paper, we propose two deep learning frameworks, MvNet and SPNet, to address the challenges of multimodal brain tumor segmentation. The proposed multi-view deep learning framework (MvNet) uses three multi-branch fully-convolutional residual networks (Mb-FCRN) to segment multimodal brain images from different viewpoints, i.e., slices along the x, y, and z axes. The three sub-networks produce independent segmentation results and vote for the final outcome. SPNet is a CNN-based framework developed to predict the survival time of patients. The proposed deep learning frameworks were evaluated on the BraTS 17 validation set and achieved competitive results for tumor segmentation: Dice scores of 0.88, 0.75, and 0.71 were achieved for whole tumor, enhancing tumor, and tumor core, respectively, and an accuracy of 0.55 was obtained for survival prediction.
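The fusion step described above — three view-specific sub-networks voting on a final label map — can be sketched as a per-voxel majority vote. The snippet below is a minimal illustration, not the authors' implementation; the function name and the fallback to the x-axis view when all three views disagree are assumptions:

```python
import numpy as np

def majority_vote(pred_x, pred_y, pred_z):
    """Fuse three per-view label maps by per-voxel majority vote.

    Each argument is an integer label array of identical shape, e.g.
    (D, H, W), produced by the sub-network for one slicing axis.
    If at least two views agree on a voxel, that label wins; when all
    three disagree, we arbitrarily fall back to the x-view prediction
    (this tie-breaking rule is an assumption, not from the paper).
    """
    agree_yz = (pred_y == pred_z)
    # If y and z agree, that label has a majority (regardless of x);
    # otherwise x either matches one of them (then x is the majority)
    # or all three disagree (then x is the fallback).
    return np.where(agree_yz, pred_y, pred_x)

# Toy example with three voxels and labels {0, 1, 2}:
px = np.array([1, 2, 0])
py = np.array([1, 0, 1])
pz = np.array([0, 0, 2])
fused = majority_vote(px, py, pz)
```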

Citation (APA)

Li, Y., & Shen, L. (2018). Deep learning based multimodal brain tumor diagnosis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10670 LNCS, pp. 149–158). Springer Verlag. https://doi.org/10.1007/978-3-319-75238-9_13
