Stacking Networks Dynamically for Image Restoration Based on the Plug-and-Play Framework

Abstract

Recently, stacked networks have shown strong performance in image restoration, including challenging motion-deblurring problems. However, the number of stacking levels is a hyper-parameter that is tuned manually, so the stacking depth remains static during training and lacks a theoretical explanation for its optimal setting. To address this challenge, we leverage the iterative process of the traditional plug-and-play method to build a dynamically stacked network for image restoration. Specifically, a new degradation model with a novel update scheme is designed to integrate a deep neural network as the prior within the plug-and-play model. Compared with static stacked networks, our models are stacked dynamically during training via iterations, guided by a solid mathematical explanation. A theoretical proof of the convergence of the dynamic stacking process is provided. Experiments on the denoising datasets BSD68 and Set12 and the motion-blur dataset GoPro demonstrate that our framework outperforms the state of the art in terms of PSNR and SSIM without an extra training process.
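The abstract does not spell out the paper's exact degradation model or update scheme, but the general plug-and-play pattern it builds on alternates a data-fidelity step with a denoising prior step, where the prior is a deep network and the number of iterations (the effective stacking depth) is decided by a convergence test rather than fixed in advance. Below is a minimal sketch of that pattern in Python; the half-quadratic-splitting-style update, the operators H/Ht, the step size, and the Gaussian-filter stand-in for the learned denoiser are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def pnp_restore(y, H, Ht, denoiser, mu=0.5, step=0.1,
                max_iters=30, tol=1e-4):
    """Generic plug-and-play loop (half-quadratic-splitting flavour).

    y        : degraded observation
    H, Ht    : assumed linear degradation operator and its adjoint
               (the paper's exact degradation model is not given here)
    denoiser : the prior step -- a deep network in the paper; any
               image -> image callable in this sketch
    The loop stops once the iterates stabilize, so the number of applied
    "stacking levels" is chosen dynamically rather than fixed a priori.
    """
    x = y.copy()
    z = y.copy()
    for k in range(max_iters):
        # Data-fidelity step: one gradient step on
        # ||H x - y||^2 / 2 + mu * ||x - z||^2 / 2
        x_new = x - step * (Ht(H(x) - y) + mu * (x - z))
        # Prior step: plug in the denoiser (the learned prior)
        z = denoiser(x_new)
        # Dynamic stopping rule: relative change of the iterate
        if np.linalg.norm(x_new - x) / (np.linalg.norm(x) + 1e-12) < tol:
            return z, k + 1
        x = x_new
    return z, max_iters


# Toy usage: denoising (H = identity) with a Gaussian filter standing in
# for the learned deep prior.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
restored, levels = pnp_restore(
    noisy, H=lambda v: v, Ht=lambda v: v,
    denoiser=lambda v: gaussian_filter(v, sigma=1.0))
print(f"stopped after {levels} dynamic stacking levels")
```

Because the loop exits as soon as the iterates stabilize, the count returned in levels plays the role of the dynamically chosen stacking depth described in the abstract.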

Citation (APA)

Wang, H., Zhang, T., Yu, M., Sun, J., Ye, W., Wang, C., & Zhang, S. (2020). Stacking networks dynamically for image restoration based on the plug-and-play framework. In Lecture Notes in Computer Science (Vol. 12358, pp. 446–462). Springer. https://doi.org/10.1007/978-3-030-58601-0_27
