Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined with CNN

25 citations · 12 Mendeley readers

This article is free to access.

Abstract

Convolutional neural networks (CNNs), with their deep feature extraction capability, have recently been applied to numerous image fusion tasks. However, fusing infrared and visible images often leads to loss of fine details and degradation of contrast in the fused image. This deterioration is associated with the conventional 'averaging' rule for base-layer fusion and the relatively coarse features extracted by the CNN. To overcome these problems, an effective fusion framework based on a visual saliency weight map (VSWM) combined with a CNN is proposed. The proposed framework first employs the VSWM method to improve the contrast of the image under consideration. Next, fine details in the image are preserved by applying multi-resolution singular value decomposition (MSVD) before further processing by the CNN. Experimental results show that the proposed method outperforms state-of-the-art methods, scoring highest on evaluation metrics such as Q0, multiscale structural similarity (MS-SSIM), and the sum of correlations of differences (SCD).
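To make the weight-map idea concrete, here is a minimal sketch of saliency-weighted fusion of two registered grayscale images. The saliency measure used (absolute deviation from the image mean, in the spirit of frequency-tuned saliency) and the function names are illustrative assumptions, not the paper's actual VSWM construction or CNN stage:

```python
import numpy as np

def saliency_map(img):
    # Simplified saliency: distance of each pixel from the image's mean
    # intensity (an assumed stand-in for the paper's VSWM construction).
    return np.abs(img - img.mean())

def vswm_fuse(ir, vis, eps=1e-8):
    """Fuse two registered grayscale images (float arrays in [0, 1])
    using saliency-derived, pixel-wise weight maps."""
    s_ir, s_vis = saliency_map(ir), saliency_map(vis)
    w_ir = s_ir / (s_ir + s_vis + eps)   # normalized weight map in [0, 1]
    # Convex combination: salient pixels of each source dominate the result.
    return w_ir * ir + (1.0 - w_ir) * vis

# Usage with random stand-in images.
rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
fused = vswm_fuse(ir, vis)
print(fused.shape)  # (64, 64)
```

Because the weights sum to one at every pixel, the fused value is always a convex combination of the two inputs, which is what protects contrast compared with plain averaging.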

Citation (APA)

Yan, L., Cao, J., Rizvi, S., Zhang, K., Hao, Q., & Cheng, X. (2020). Improving the Performance of Image Fusion Based on Visual Saliency Weight Map Combined with CNN. IEEE Access, 8, 59976–59986. https://doi.org/10.1109/ACCESS.2020.2982712
