Multi-Scale Visual Attention Deep Convolutional Neural Network for Multi-Focus Image Fusion

Abstract

To address the multi-focus image fusion task, this paper presents an end-to-end deep convolutional neural network (DCNN) that produces the fused image directly from the source images. To improve fusion accuracy, the proposed network introduces a multi-scale feature extraction (MFE) unit that collects complementary features at different spatial scales and fuses them to exploit richer spatial information. In addition, a visual attention unit helps the network locate the focused regions more accurately and select the most useful features, so that details are preserved during fusion. Experimental results show that the proposed method outperforms several existing multi-focus image fusion methods in both subjective visual quality and objective quality metrics.
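The abstract describes two architectural ideas: a multi-scale feature extraction unit (parallel convolutions with different receptive fields) and a spatial visual attention unit that reweights features toward the focused regions. The following is a minimal PyTorch sketch of these two ideas, not the authors' actual implementation — the module names, kernel sizes, and channel counts are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleFeatureExtraction(nn.Module):
    """Parallel convolutions at several kernel sizes, concatenated and fused.

    Kernel sizes 3/5/7 are an illustrative choice, not taken from the paper.
    """
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # collect complementary features from different spatial scales
        feats = torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
        return self.act(self.fuse(feats))

class SpatialAttention(nn.Module):
    """Single-channel attention map that reweights features spatially,
    emphasizing in-focus regions (a generic spatial-attention sketch)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))  # per-pixel weights in (0, 1)
        return x * attn

# Toy end-to-end forward pass: two grayscale source images stacked as
# input channels, one fused image out.
net = nn.Sequential(
    MultiScaleFeatureExtraction(2, 16),
    SpatialAttention(16),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
pair = torch.rand(1, 2, 64, 64)  # batch of one image pair, 64x64
fused = net(pair)
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

The key property shown here is that the fusion is end-to-end: the fused image is produced directly by a single forward pass, with no separate decision-map or post-processing stage.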

Citation (APA)

Lai, R., Li, Y., Guan, J., & Xiong, A. (2019). Multi-Scale Visual Attention Deep Convolutional Neural Network for Multi-Focus Image Fusion. IEEE Access, 7, 114385–114399. https://doi.org/10.1109/ACCESS.2019.2935006
