Image Motion Deblurring Based on Deep Residual Shrinkage and Generative Adversarial Networks

Abstract

A network structure (DRSN-GAN) is proposed for image motion deblurring that combines a deep residual shrinkage network (DRSN) with a generative adversarial network (GAN) to address the poor noise immunity and low generalizability of deblurring algorithms based solely on GANs. First, an end-to-end approach is used to recover a clear image from a blurred image without estimating a blurring kernel. Next, a DRSN is used as the generator in the GAN to remove noise from the input image while learning residuals, which improves robustness. The batch normalization (BN) and ReLU layers in the DRSN are moved in front of the convolution layers, making the network easier to train. Finally, deblurring performance is verified on the GoPro, Köhler, and Lai datasets. Experimental results show that the deblurred images have better subjective visual quality and higher objective evaluation scores than those produced by algorithms such as MPRNet, and that image edge and texture restoration is improved along with overall image quality. The proposed model yields slightly higher PSNR and SSIM values than the recent MPRNet, as well as increased YOLO detection accuracy, while requiring 21.89% fewer parameters.
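To make the pre-activation residual shrinkage block concrete, the sketch below shows one possible PyTorch implementation. It assumes the channel-wise soft-thresholding scheme of the original DRSN (the threshold is a sigmoid-gated fraction of each channel's mean absolute activation); the abstract does not specify the block internals, layer widths, or how many such blocks the generator stacks, so those details are illustrative rather than the authors' exact design.

# Minimal sketch of a pre-activation residual shrinkage block (assumed design).
import torch
import torch.nn as nn

class ResidualShrinkageBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Pre-activation ordering: BN and ReLU are placed before each convolution,
        # as the abstract describes, which eases training of deep residual stacks.
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )
        # Small sub-network that predicts a per-channel shrinkage gate in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.body(x)
        # Channel-wise threshold: gated fraction of the mean absolute activation.
        abs_mean = residual.abs().mean(dim=(2, 3))      # (N, C)
        alpha = self.fc(abs_mean)                       # (N, C), each value in (0, 1)
        tau = (alpha * abs_mean)[:, :, None, None]      # broadcastable thresholds
        # Soft thresholding suppresses small, noise-like residual activations.
        shrunk = torch.sign(residual) * torch.clamp(residual.abs() - tau, min=0.0)
        return x + shrunk                               # skip connection

# Usage: the block preserves the input shape, so it can be stacked in a generator.
block = ResidualShrinkageBlock(64)
out = block(torch.randn(2, 64, 128, 128))               # same shape as the input

In this form, weak residual activations (treated as noise) are zeroed by the soft threshold before being added back through the skip connection, which is the mechanism the abstract credits for the improved noise immunity of the generator.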

Citation (APA)

Jiang, W., & Liu, A. (2022). Image Motion Deblurring Based on Deep Residual Shrinkage and Generative Adversarial Networks. Computational Intelligence and Neuroscience, 2022. https://doi.org/10.1155/2022/5605846
