Dense Multi-focus Fusion Net: A Deep Unsupervised Convolutional Network for Multi-focus Image Fusion


Abstract

In this paper, we introduce a novel unsupervised deep learning (DL) method for multi-focus image fusion. Existing DL-based multi-focus image fusion (MFIF) methods treat MFIF as a classification problem and require a massive amount of reference images to train their networks. Instead, we propose an end-to-end unsupervised DL model that fuses multi-focus color images without reference ground-truth images. In contrast to conventional CNNs, our proposed model consists only of convolutional layers yet achieves promising performance. In our network, all layers in the feature extraction subnetworks are connected to each other in a feed-forward fashion, with the aim of extracting more useful common low-level features from a multi-focus image pair. Instead of a conventional loss function, our model uses structural similarity (SSIM) to compute the reconstruction loss. The proposed model can process variable-size images during testing and validation. Experimental results on various test images confirm that our method achieves state-of-the-art performance on both subjective and objective evaluation metrics.
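The abstract's SSIM-based reconstruction loss can be illustrated with a minimal sketch. This is not the authors' implementation; it is a simplified, global-statistics version of SSIM (no sliding window, no Gaussian weighting) written in NumPy, with the standard stabilizing constants assumed from the original SSIM formulation:

```python
import numpy as np

def ssim_loss(img1, img2, c1=0.01 ** 2, c2=0.03 ** 2):
    """Return 1 - SSIM for two images with intensities in [0, 1].

    Simplified sketch: SSIM is computed from global image statistics
    rather than over local windows, as a full implementation would.
    c1, c2 are the usual small constants that stabilize the division.
    """
    mu1, mu2 = img1.mean(), img2.mean()
    var1, var2 = img1.var(), img2.var()
    cov = ((img1 - mu1) * (img2 - mu2)).mean()
    ssim = ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2)
    )
    # SSIM is 1 for identical images, so 1 - SSIM works as a loss to minimize.
    return 1.0 - ssim
```

In a training loop, the fused output would be compared against each source image with a loss of this form, driving the network to preserve the structure of both inputs without any ground-truth fused image.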

Cite

CITATION STYLE

APA

Mustafa, H. T., Liu, F., Yang, J., Khan, Z., & Huang, Q. (2019). Dense Multi-focus Fusion Net: A Deep Unsupervised Convolutional Network for Multi-focus Image Fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11508 LNAI, pp. 153–163). Springer Verlag. https://doi.org/10.1007/978-3-030-20912-4_15
