Infrared-visible image fusion based on convolutional neural networks (CNN)

Abstract

Image fusion is the process of combining multiple images of the same scene into a single image, with the aim of preserving the full content and retaining the important features of each of the original images. In this paper, a novel image fusion method based on Convolutional Neural Networks (CNN) and saliency detection is proposed. We use image representations derived from a CNN optimized for infrared-visible image fusion. Since the lower layers of the network capture the fine, pixel-level detail of the input image, while the higher layers capture high-level content in terms of objects and their arrangement, we exploit more low-layer features of the visible image and more high-layer features of the infrared image during fusion. In the fusion procedure, the infrared target is effectively highlighted with a saliency detection method, and only the salient information of the infrared image is fused. The method aims to preserve as much of the abundant detail of the visible image as possible while retaining the salient information of the infrared image. Experimental results show that the proposed fusion method is promising.
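The abstract describes the general recipe (low-layer CNN features for the visible image, high-layer CNN features for the infrared image, and a saliency mask that keeps only the salient infrared content), but the paper's exact network, layer choices, and saliency detector are not given here. The following is a minimal sketch of that idea under assumptions of our own: a pretrained VGG-19 backbone, hand-picked layer indices, and a simple brightness-threshold saliency stand-in, none of which are claimed to match the authors' implementation.

```python
# Illustrative sketch only: backbone (VGG-19), layer indices, and the
# brightness-based saliency mask are assumptions, not the paper's method.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

def vgg_features(img, layer_idx):
    """Collect VGG-19 feature-map activations at the requested layer indices."""
    net = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
    x = img.repeat(1, 3, 1, 1)  # grayscale (1x1xHxW) -> 3 channels for VGG
    feats = {}
    with torch.no_grad():
        for i, layer in enumerate(net):
            x = layer(x)
            if i in layer_idx:
                feats[i] = x
    return feats

def saliency_mask(ir, thresh=0.6):
    """Crude saliency stand-in: hot infrared targets appear bright."""
    s = (ir - ir.min()) / (ir.max() - ir.min() + 1e-8)
    return (s > thresh).float()

def fuse(vis, ir, low=2, high=21, alpha=0.7):
    """Weight visible low-layer detail and infrared high-layer content,
    restricted to salient infrared regions (illustrative weighting only)."""
    f_vis = vgg_features(vis, {low})[low]    # low layer: fine detail of visible
    f_ir = vgg_features(ir, {high})[high]    # high layer: objects/targets in IR
    # Turn feature activity into full-resolution per-pixel weight maps.
    w_vis = F.interpolate(f_vis.mean(1, keepdim=True), size=vis.shape[-2:],
                          mode="bilinear", align_corners=False)
    w_ir = F.interpolate(f_ir.mean(1, keepdim=True), size=ir.shape[-2:],
                         mode="bilinear", align_corners=False)
    w_ir = w_ir * saliency_mask(ir)          # keep only salient IR content
    w = torch.softmax(torch.cat([alpha * w_vis, (1 - alpha) * w_ir], dim=1), dim=1)
    return w[:, :1] * vis + w[:, 1:] * ir    # fused single-channel image

# Usage: vis and ir are registered 1x1xHxW tensors scaled to [0, 1].
# fused = fuse(vis, ir)
```

In this sketch the softmax-normalized weight maps simply bias the fused pixel toward whichever source the chosen CNN layer responds to more strongly; a faithful reproduction would follow the layer selection and saliency detector specified in the paper itself.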

Citation (APA)

Ren, X., Meng, F., Hu, T., Liu, Z., & Wang, C. (2018). Infrared-visible image fusion based on convolutional neural networks (CNN). In Lecture Notes in Computer Science (Vol. 11266, pp. 301–307). Springer. https://doi.org/10.1007/978-3-030-02698-1_26
