A generative adversarial network for infrared and visible image fusion based on semantic segmentation

Abstract

This paper proposes a new generative adversarial network for infrared and visible image fusion based on semantic segmentation (SSGAN), which considers not only the low-level features of infrared and visible images but also high-level semantic information. Source images are divided into foregrounds and backgrounds by semantic masks. A generator with a dual-encoder-single-decoder framework extracts the features of the foregrounds and backgrounds through separate encoder paths. Moreover, the discriminator's input image is constructed from the semantic segmentation by combining the foregrounds of the infrared images with the backgrounds of the visible images. Consequently, the prominence of thermal targets in the infrared images and the texture details in the visible images are preserved simultaneously in the fused images. Qualitative and quantitative experiments on publicly available datasets demonstrate that the proposed approach significantly outperforms state-of-the-art methods.
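To make the two core ideas concrete, below is a minimal sketch (not the authors' code) assuming PyTorch, single-channel images of shape (N, 1, H, W), and a binary semantic mask (1 = foreground). The layer widths and depths are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn


class DualEncoderGenerator(nn.Module):
    """Toy dual-encoder-single-decoder generator in the spirit of SSGAN:
    one encoder path for masked foregrounds, another for masked backgrounds,
    with the two feature maps concatenated and decoded into one fused image."""

    def __init__(self, ch: int = 16):
        super().__init__()

        def encoder() -> nn.Sequential:
            # Each path takes the masked IR and visible images as 2 channels.
            return nn.Sequential(
                nn.Conv2d(2, ch, 3, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            )

        self.fg_encoder = encoder()  # foreground path (thermal targets)
        self.bg_encoder = encoder()  # background path (texture details)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis, mask):
        fg = self.fg_encoder(torch.cat([ir * mask, vis * mask], dim=1))
        bg = self.bg_encoder(torch.cat([ir * (1 - mask), vis * (1 - mask)], dim=1))
        return self.decoder(torch.cat([fg, bg], dim=1))


def discriminator_real_input(ir, vis, mask):
    """Compose the discriminator's reference sample as the abstract describes:
    infrared foregrounds pasted onto visible backgrounds via the semantic mask."""
    return mask * ir + (1.0 - mask) * vis


# Hypothetical usage with random tensors standing in for real source images.
ir = torch.rand(2, 1, 128, 128)
vis = torch.rand(2, 1, 128, 128)
mask = (torch.rand(2, 1, 128, 128) > 0.5).float()
fused = DualEncoderGenerator()(ir, vis, mask)
real = discriminator_real_input(ir, vis, mask)
print(fused.shape, real.shape)  # both torch.Size([2, 1, 128, 128])
```

Splitting the encoder into two paths lets the network learn different feature statistics for thermal targets and textured backgrounds, while the mask-composited reference image gives the discriminator a target that already combines the desired properties of both modalities.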

Citation (APA)

Hou, J., Zhang, D., Wu, W., Ma, J., & Zhou, H. (2021). A generative adversarial network for infrared and visible image fusion based on semantic segmentation. Entropy, 23(3), 376. https://doi.org/10.3390/e23030376
