Multimodal medical image fusion with convolution sparse representation and mutual information correlation in NSST domain

Abstract

Multimodal medical image fusion is an effective way to support a range of clinical tasks, such as diagnosis and postoperative treatment planning. In this study, a medical image fusion method based on convolutional sparse representation (CSR) and mutual information correlation is proposed. The source image is decomposed into one high-frequency and one low-frequency sub-band by the non-subsampled shearlet transform (NSST). For the high-frequency sub-band, CSR is used to fuse the high-frequency coefficients. For the low-frequency sub-band, different fusion strategies are applied to different regions according to a mutual information correlation analysis. Experiments on two medical image fusion problems, CT–MRI and MRI–SPECT, show that the method performs robustly on five common objective metrics. Compared with six other advanced medical image fusion methods, the proposed method achieves better results in both subjective visual quality and objective evaluation metrics.
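The pipeline described above can be sketched in simplified form. This is not the authors' implementation: a box blur stands in for the NSST low/high decomposition, a max-absolute rule stands in for CSR coefficient fusion, and a local-energy weighting stands in for the region-wise mutual information correlation analysis. All function names here are illustrative.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur (stand-in for the NSST low-pass decomposition)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img, k=5):
    """Split an image into low- and high-frequency sub-bands."""
    low = box_blur(img, k)
    return low, img - low

def fuse_high(h1, h2):
    """Max-absolute rule (stand-in for CSR-based coefficient fusion)."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low(l1, l2):
    """Energy-weighted average (stand-in for the MI-correlation strategy)."""
    w1 = l1 ** 2 + 1e-12
    w2 = l2 ** 2 + 1e-12
    return (w1 * l1 + w2 * l2) / (w1 + w2)

def fuse(img1, img2):
    """Decompose each source, fuse the sub-bands, and recombine."""
    l1, h1 = decompose(img1)
    l2, h2 = decompose(img2)
    return fuse_low(l1, l2) + fuse_high(h1, h2)
```

Fusing an image with itself returns (up to floating-point error) the image unchanged, a basic sanity check for any sub-band fusion scheme of this shape.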

APA

Guo, P., Xie, G., Li, R., & Hu, H. (2023). Multimodal medical image fusion with convolution sparse representation and mutual information correlation in NSST domain. Complex and Intelligent Systems, 9(1), 317–328. https://doi.org/10.1007/s40747-022-00792-9
