Multi-Modal Image Fusion via Sparse Representation and Multi-Scale Anisotropic Guided Measure

Abstract

Multi-modal image fusion plays an important role in many fields. This paper proposes a novel multi-modal image fusion method based on robust principal component analysis (RPCA), which consists of a low-rank component fusion stage and a sparse component fusion stage. In the low-rank fusion stage, a universal low-rank dictionary is constructed for sparse representation (SR), and low-rank fusion is converted into sparse-coefficient fusion via the batch-OMP algorithm. In the sparse fusion stage, an anisotropic weight map is constructed to express the salient structures of the images, and a multi-scale anisotropic guided measure is proposed to guide the fusion process so that scale-aware salient details of the sparse components are extracted and preserved. Combining the two stages yields the final fused image. Experimental results show that the proposed method outperforms nine state-of-the-art methods on both gray–gray and gray–color multi-modal fusion, in terms of qualitative and quantitative evaluations.
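The RPCA-based pipeline described above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's method: RPCA is solved with a basic ADMM iteration for principal component pursuit, the low-rank parts are fused by simple averaging (the paper instead fuses SR coefficients over a low-rank dictionary via batch-OMP), and the sparse parts are fused by a max-absolute rule (the paper uses the multi-scale anisotropic guided measure). All function names and parameter choices here are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage operator used by both RPCA sub-steps."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: shrink the spectrum of X by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def rpca(M, lam=None, mu=None, iters=200, tol=1e-6):
    """Split M into a low-rank part L and a sparse part S (M ~ L + S)
    via a basic ADMM iteration for principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else m * n / (4.0 * np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)             # low-rank update
        S = soft_threshold(M - L + Y / mu, lam / mu)  # sparse update
        R = M - L - S                                 # constraint residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

def fuse(img_a, img_b):
    """Fuse two registered grayscale images (float arrays in [0, 1]).

    Stand-in fusion rules: average the low-rank parts, keep the
    larger-magnitude sparse coefficient at each pixel."""
    La, Sa = rpca(img_a)
    Lb, Sb = rpca(img_b)
    L_fused = 0.5 * (La + Lb)
    S_fused = np.where(np.abs(Sa) >= np.abs(Sb), Sa, Sb)
    return np.clip(L_fused + S_fused, 0.0, 1.0)
```

The max-absolute rule on the sparse components is a common baseline for detail fusion; the paper's contribution is precisely to replace it with a scale-aware anisotropic guided measure so that salient structures, not just large coefficients, drive the selection.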

Citation (APA)

Zhang, S., Huang, F., Zhong, H., Liu, B., Chen, Y., & Wang, Z. (2020). Multi-Modal Image Fusion via Sparse Representation and Multi-Scale Anisotropic Guided Measure. IEEE Access, 8, 35638–35649. https://doi.org/10.1109/ACCESS.2020.2973269
