A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images

Abstract

Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide complementary information that no single imaging modality can capture on its own, integrating this information for segmentation is a challenging task. Numerous methods have been introduced in recent years to address multi-modal medical image segmentation. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method for enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. We then systematically optimize a convolutional neural network (CNN) architecture on this enhanced dataset to customize it for the segmentation task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
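To illustrate the kind of multi-modal fusion the abstract describes, the sketch below stacks an MRI volume and a synthetically generated CT volume as input channels to a small 3-D CNN that predicts per-voxel tumor scores. This is a minimal, hypothetical example assuming PyTorch; the layer sizes, the class name MultiModalSegNet, and the channel-concatenation fusion are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (not the authors' exact architecture): a small 3-D CNN that
# fuses an MRI volume and a synthetic CT volume by stacking them as input
# channels, then predicts a per-voxel tumor mask. Sizes are illustrative.
import torch
import torch.nn as nn


class MultiModalSegNet(nn.Module):
    def __init__(self, in_channels: int = 2, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution maps features to per-voxel class scores.
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, mri: torch.Tensor, synthetic_ct: torch.Tensor) -> torch.Tensor:
        # Input-level fusion: two (B, 1, D, H, W) volumes -> (B, 2, D, H, W).
        x = torch.cat([mri, synthetic_ct], dim=1)
        return self.head(self.encoder(x))


# Example forward pass on random tensors standing in for co-registered volumes.
model = MultiModalSegNet()
mri = torch.randn(1, 1, 32, 64, 64)           # placeholder MRI volume
synthetic_ct = torch.randn(1, 1, 32, 64, 64)  # placeholder synthetic CT volume
logits = model(mri, synthetic_ct)             # (1, 2, 32, 64, 64) class scores
```

The point of the sketch is the fusion step: because the synthetic CT is generated from, and therefore co-registered with, the MRI, the two modalities can be concatenated channel-wise and processed by a single segmentation network.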

Citation (APA)

Islam, K. T., Wijewickrema, S., & O'Leary, S. (2022). A deep learning framework for segmenting brain tumors using MRI and synthetically generated CT images. Sensors, 22(2), 523. https://doi.org/10.3390/s22020523
