An Adaptive Two-Scale Image Fusion of Visible and Infrared Images

Abstract

In this paper, we propose an adaptive two-scale image fusion method based on latent low-rank representation (LatLRR). First, both the infrared (IR) and visible (VI) images are decomposed by LatLRR into a two-scale representation consisting of low-rank parts (the global structure) and saliency parts (the local structure); the decomposition also suppresses noise. Then, a guided filter is applied to the saliency parts to exploit spatial consistency, which effectively reduces artifacts. For the fusion rule of the low-rank parts, we construct adaptive weights using fusion global-local-topology particle swarm optimization (FGLT-PSO) so that more useful information is retained from the source images. Finally, the fused image is reconstructed by adding the fused low-rank part and the fused saliency part. Experimental results on publicly available infrared and visible image fusion datasets show that the proposed method outperforms several representative image fusion algorithms in terms of both subjective visual quality and objective assessment.
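The abstract describes a four-step pipeline: two-scale decomposition, guided-filter refinement of the saliency weights, weighted fusion of the low-rank parts, and reconstruction by summation. The sketch below illustrates that flow under loudly stated simplifications: two_scale_decompose is a plain low-pass/high-pass stand-in rather than the actual LatLRR optimization, the low-rank weight w_low is fixed rather than optimized by FGLT-PSO, and all function names and parameters are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving filtering of `src` steered by `guide` (standard guided filter)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = corr_I - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)


def two_scale_decompose(img, radius=15):
    """Stand-in two-scale split: a smoothed base as the 'low-rank' part and the
    residual as the 'saliency' part (the paper obtains these via LatLRR)."""
    low_rank = uniform_filter(img, 2 * radius + 1)
    saliency = img - low_rank
    return low_rank, saliency


def fuse(ir, vi, w_low=0.5):
    """Fuse IR and VI images following the two-scale scheme outlined in the abstract.
    `w_low` is a fixed placeholder for the FGLT-PSO-optimized adaptive weight."""
    ir_low, ir_sal = two_scale_decompose(ir)
    vi_low, vi_sal = two_scale_decompose(vi)

    # Saliency fusion: a choose-max weight map refined by the guided filter,
    # so spatially consistent regions receive consistent weights.
    weight = (np.abs(ir_sal) >= np.abs(vi_sal)).astype(np.float64)
    weight = np.clip(guided_filter(ir, weight, radius=4, eps=1e-3), 0.0, 1.0)
    fused_sal = weight * ir_sal + (1.0 - weight) * vi_sal

    # Low-rank fusion with a (here fixed) weight between the two global structures.
    fused_low = w_low * ir_low + (1.0 - w_low) * vi_low

    # Reconstruction: sum of the fused low-rank and saliency parts.
    return np.clip(fused_low + fused_sal, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))   # placeholder infrared image in [0, 1]
    vi = rng.random((128, 128))   # placeholder visible image in [0, 1]
    print(fuse(ir, vi).shape)     # (128, 128)
```

The sketch only mirrors the structure of the pipeline; reproducing the paper's results would require the actual LatLRR decomposition and the FGLT-PSO weight optimization in place of the stand-ins above.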

Citation (APA)

Han, X., Lv, T., Song, X., Nie, T., Liang, H., He, B., & Kuijper, A. (2019). An Adaptive Two-Scale Image Fusion of Visible and Infrared Images. IEEE Access, 7, 56341–56352. https://doi.org/10.1109/ACCESS.2019.2913289
