A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis


Abstract

Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic information, but doing so in practice is limited by excessive data acquisition time. In this paper, we propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI that takes incomplete k-space data of several source modalities as input. The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality. The model is formulated as a variational problem that leverages several learnable modality-specific feature extractors and a multimodal synthesis module. We propose a learnable optimization algorithm to solve this model, which induces a multi-phase network whose parameters can be trained on multi-modal MRI data. Moreover, a bilevel optimization framework is employed for robust parameter training. We demonstrate the effectiveness of our approach through extensive numerical experiments.
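The full paper is not reproduced on this page, so the following PyTorch sketch is only an illustrative guess at the kind of architecture the abstract describes: an unrolled multi-phase network whose phases alternate a k-space data-consistency gradient step with a learned, modality-specific regularization step, followed by a synthesis module that fuses the reconstructed source modalities into the target modality. All class names, the CNN regularizers, the update rule, and the hyperparameters are assumptions for illustration, not the authors' actual model.

```python
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    # Hypothetical modality-specific learnable regularizer (a small CNN stand-in).
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class JointReconSynthNet(nn.Module):
    # Assumed unrolled multi-phase network: each phase takes a data-consistency
    # gradient step per source modality plus a learned regularization step;
    # a synthesis head then maps the reconstructions to the target modality.
    def __init__(self, n_phases: int = 5, n_sources: int = 2):
        super().__init__()
        self.n_phases = n_phases
        self.extractors = nn.ModuleList(
            [FeatureExtractor() for _ in range(n_sources)]
        )
        self.step = nn.Parameter(torch.full((n_phases,), 0.1))  # learned step sizes
        self.synth = nn.Sequential(  # multimodal synthesis module (assumed form)
            nn.Conv2d(n_sources, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, y_list, mask):
        # y_list: undersampled k-space per source modality, complex (B, 1, H, W)
        # mask:   binary sampling mask, real (1, 1, H, W)
        xs = [torch.fft.ifft2(y).real for y in y_list]  # zero-filled initialization
        for t in range(self.n_phases):
            for i, y in enumerate(y_list):
                # Gradient of the data-fidelity term ||M F x - y||^2: F^H M (M F x - y).
                residual = mask * torch.fft.fft2(xs[i].to(torch.complex64)) - y
                grad = torch.fft.ifft2(mask * residual).real
                xs[i] = xs[i] - self.step[t] * grad + self.extractors[i](xs[i])
        target = self.synth(torch.cat(xs, dim=1))  # synthesize the target modality
        return xs, target


# Minimal usage example with two source modalities and random data.
B, H, W = 1, 64, 64
mask = (torch.rand(1, 1, H, W) < 0.3).float()
y_list = [mask * torch.fft.fft2(torch.randn(B, 1, H, W, dtype=torch.complex64))
          for _ in range(2)]
net = JointReconSynthNet()
recons, synthesized = net(y_list, mask)
```

The bilevel training scheme mentioned in the abstract is not sketched here; this example covers only the forward unrolled model, and its parameters would in practice be trained end-to-end against fully sampled reference images.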

Citation (APA)

Bian, W., Zhang, Q., Ye, X., & Chen, Y. (2022). A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13436 LNCS, pp. 354–364). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16446-0_34
