Abstract: Combining medical images from different modalities yields a single image that is more useful in healthcare applications than either source alone. Medical image fusion merges two or more images acquired by multiple sensors into one output image that presents more effective and useful information than the individual inputs. This paper proposes a multi-modal medical image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and pulse coupled neural networks (PCNN). The input images are first decomposed by the NSCT into low- and high-frequency subbands. A PCNN then serves as the fusion rule for integrating both the low- and high-frequency subbands, and the inverse NSCT reconstructs the fused image. The resulting fused images help doctors with disease diagnosis and patient treatment. The proposed algorithm is tested on six groups of multi-modal medical images comprising 100 pairs of input images and is compared with eight fusion methods. Performance is evaluated with the fusion metrics peak signal-to-noise ratio (PSNR), mutual information (MI), entropy (EN), weighted edge information (QAB/F), nonlinear correlation information entropy (Qncie), standard deviation (SD), and average gradient (AG). Experimental results show that the proposed algorithm outperforms the other medical image fusion methods and achieves promising results. Graphical abstract: see full text.
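The abstract outlines a decompose-fuse-reconstruct pipeline: NSCT splits each input into subbands, a PCNN decides which source's coefficients to keep, and the inverse NSCT rebuilds the fused image. Neither NSCT nor PCNN ships with standard Python libraries, so the sketch below illustrates only the PCNN fusion rule on a pair of subband coefficient arrays, using a simplified PCNN (single feeding input, 3x3 linking neighbourhood, exponentially decaying threshold). The model constants and the "keep the coefficient that fires more often" rule are common choices in the PCNN fusion literature, not necessarily the exact ones used in the paper.

```python
import numpy as np

def pcnn_fire_map(S, iterations=30, beta=0.2, alpha_theta=0.2, V_theta=20.0):
    """Simplified PCNN: run the network on stimulus S and return the
    cumulative firing count of each neuron (pixel). Stronger stimuli
    fire earlier and more often. Parameters are illustrative defaults."""
    S = S / (S.max() + 1e-12)          # normalise stimulus to [0, 1]
    Y = np.zeros_like(S)               # pulse output of the previous step
    theta = np.ones_like(S)            # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iterations):
        # linking input: sum of pulses in the 3x3 neighbourhood (centre excluded)
        link = sum(np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - Y
        U = S * (1.0 + beta * link)    # internal activity (feeding * linking)
        Y = (U > theta).astype(float)  # neurons whose activity exceeds the threshold fire
        theta = theta * np.exp(-alpha_theta) + V_theta * Y  # decay, then raise where fired
        fires += Y
    return fires

def fuse_subband(a, b):
    """Fusion rule: per coefficient, keep the source whose PCNN
    (driven by coefficient magnitude) fires more often."""
    fa = pcnn_fire_map(np.abs(a))
    fb = pcnn_fire_map(np.abs(b))
    return np.where(fa >= fb, a, b)
```

In the full pipeline, `fuse_subband` would be applied to each corresponding pair of NSCT subbands of the two modalities, after which the inverse transform produces the fused image.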
CITATION STYLE
Ibrahim, S. I., Makhlouf, M. A., & El-Tawel, G. S. (2023). Multimodal medical image fusion algorithm based on pulse coupled neural networks and nonsubsampled contourlet transform. Medical and Biological Engineering and Computing, 61(1), 155–177. https://doi.org/10.1007/s11517-022-02697-8