Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks


Abstract

Advances in automated diagnostic tools allow researchers to extract increasingly rich information from medical images. Recently, multi-modality images have been used to obtain more informative medical images; they carry significantly more information than traditional single-modality images. However, constructing multi-modality images is not an easy task. The proposed approach first decomposes the source images into sub-bands using the non-subsampled contourlet transform (NSCT). An extreme version of the Inception network (Xception) is then used to extract features from the source images, and multi-objective differential evolution selects the optimal features. Fusion functions based on the coefficient of determination and energy loss are then applied to obtain the fused coefficients, and the fused image is finally recovered by applying the inverse NSCT. Extensive experimental results show that the proposed approach outperforms competitive multi-modality image fusion approaches.
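The feature-selection step of the abstract can be illustrated with a minimal differential evolution sketch. This is not the paper's method: the authors use a multi-objective DE over deep Xception features, whereas the sketch below scalarizes the objectives into a single weighted score (a fit-quality proxy minus a sparsity penalty) and operates on a generic feature matrix. All parameter values (`pop_size`, `F`, `CR`) and the toy scoring function are illustrative assumptions.

```python
import numpy as np

def de_feature_select(features, score_fn, pop_size=20, gens=50,
                      F=0.5, CR=0.9, seed=0):
    """Differential evolution over continuous masks in [0, 1];
    a feature is 'selected' when its mask entry exceeds 0.5.
    Simplified, single-objective sketch of the paper's selection step."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    pop = rng.random((pop_size, d))                 # initial population of masks
    fit = np.array([score_fn(m > 0.5) for m in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct population members
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            # binomial crossover with the current individual
            cross = rng.random(d) < CR
            trial = np.where(cross, mutant, pop[i])
            f = score_fn(trial > 0.5)
            if f > fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmax(fit)] > 0.5                # best binary mask found

# Toy usage: only the first two of ten features explain the target.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X[:, 0] + X[:, 1]

def score_fn(mask):
    """R^2 of a least-squares fit on the selected columns, minus a
    small penalty per selected feature (illustrative objective)."""
    if not mask.any():
        return -1e9
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r2 = 1.0 - (y - Xs @ coef).var() / y.var()
    return r2 - 0.01 * mask.sum()

mask = de_feature_select(X, score_fn)
```

In the paper's setting, `features` would be Xception activations of the source images and the score would combine multiple fusion-quality objectives rather than a single scalarized one.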

Citation (APA)
Kaur, M., & Singh, D. (2021). Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks. Journal of Ambient Intelligence and Humanized Computing, 12(2), 2483–2493. https://doi.org/10.1007/s12652-020-02386-0
