Guided Deep Decoder: Unsupervised Image Pair Fusion


Abstract

The fusion of input and guidance images whose information is complementary (e.g., hyperspectral and RGB image fusion, or pansharpening) can be interpreted as one general problem. However, previous studies applied task-specific handcrafted priors and did not address these problems with a unified approach. To address this limitation, we propose a guided deep decoder network as a general prior. The proposed network is composed of an encoder-decoder network that extracts multi-scale features from a guidance image and a deep decoder network that generates the output image. The two networks are connected by feature refinement units, which embed the multi-scale guidance features into the deep decoder. The network parameters are optimized in an unsupervised way, without training data. Our results show that the proposed network achieves state-of-the-art performance in various image fusion problems.
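The abstract describes the overall structure only, not the exact layers. A minimal PyTorch sketch of the idea, assuming specific (hypothetical) module shapes: a strided-conv encoder extracts guidance features at several scales, a deep decoder upsamples a random latent code, and a "feature refinement unit" (here assumed to be a 1×1 conv over concatenated features) injects guidance at each scale. All channel sizes and module names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureRefinementUnit(nn.Module):
    """Assumed form: fuse a decoder feature map with a same-scale guidance feature map."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, dec_feat, guide_feat):
        return torch.relu(self.conv(torch.cat([dec_feat, guide_feat], dim=1)))

class GuidedDeepDecoder(nn.Module):
    def __init__(self, channels=16, out_channels=3, scales=3):
        super().__init__()
        # Encoder on the guidance image: one strided conv per scale.
        self.guide_encoders = nn.ModuleList(
            nn.Conv2d(3 if i == 0 else channels, channels, 3, stride=2, padding=1)
            for i in range(scales)
        )
        # Deep decoder: conv + refinement + upsample per scale.
        self.dec_convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(scales)
        )
        self.frus = nn.ModuleList(FeatureRefinementUnit(channels) for _ in range(scales))
        self.head = nn.Conv2d(channels, out_channels, kernel_size=1)

    def forward(self, z, guide):
        # Multi-scale guidance features (finest first).
        feats, g = [], guide
        for enc in self.guide_encoders:
            g = torch.relu(enc(g))
            feats.append(g)
        # Decode coarse-to-fine, injecting guidance features at each scale.
        x = z  # latent code at the coarsest resolution
        for conv, fru, g in zip(self.dec_convs, self.frus, reversed(feats)):
            x = torch.relu(conv(x))
            x = fru(x, g)
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return torch.sigmoid(self.head(x))

# Unsupervised fitting: the network itself is the prior, so one optimizes its
# parameters on the single observed image pair (no training set), e.g. by
# minimizing a reconstruction loss between a degraded version of the output
# and the observed low-resolution/low-spectral input.
guide = torch.rand(1, 3, 32, 32)           # high-resolution guidance (e.g. RGB)
z = torch.rand(1, 16, 4, 4)                # random latent at 1/8 resolution
model = GuidedDeepDecoder(channels=16, out_channels=3, scales=3)
out = model(z, guide)                       # full-resolution fused output
```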

Citation (APA)

Uezato, T., Hong, D., Yokoya, N., & He, W. (2020). Guided Deep Decoder: Unsupervised Image Pair Fusion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12351 LNCS, pp. 87–102). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58539-6_6
