Deep residual deconvolutional networks for defocus blur detection

Abstract

Accurate defocus blur detection has attracted wide research interest in recent years. However, it remains a meaningful yet challenging machine vision task, and most existing methods rely on prior knowledge. Convolutional neural networks have proved hugely successful across computer vision and machine learning. This paper proposes a simple yet effective defocus blur detection method based on a deep residual deconvolutional network (DRDN), a residual convolutional encoder-decoder. DRDN automatically generates pixel-level predictions for defocus-blurred images and reconstructs detection results at the same size as the input by performing deconvolution at multiple scales through transposed convolutions and skip connections. A sliding-window strategy is then used to traverse the input image with a fixed stride. Experiments on challenging defocus blur detection benchmarks show that the algorithm achieves state-of-the-art performance and strikes a strong balance between detection accuracy and detection time.
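The sliding-window step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, stride, and the toy variance-based patch scorer are all assumptions made here for demonstration; in the paper the per-patch score would come from the trained DRDN.

```python
import numpy as np

def sliding_window_map(image, predict_patch, patch=32, stride=8):
    """Traverse the image with a fixed stride, score each patch with
    predict_patch (in the paper, the trained network; here, any callable),
    and average overlapping scores into a per-pixel map."""
    h, w = image.shape[:2]
    score = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            s = predict_patch(image[y:y + patch, x:x + patch])
            score[y:y + patch, x:x + patch] += s
            count[y:y + patch, x:x + patch] += 1
    # avoid division by zero at any uncovered border pixels
    return score / np.maximum(count, 1)

# toy scorer: patch variance as a crude sharpness proxy (assumption, not DRDN)
blur_map = sliding_window_map(np.random.rand(64, 64), lambda p: p.var())
print(blur_map.shape)  # (64, 64)
```

Averaging the overlapping patch scores yields a smooth pixel-level map of the same height and width as the input, matching the reconstruction goal stated in the abstract.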

Citation (APA)

Zeng, K., Wang, Y., Mao, J., & Zhou, X. (2021). Deep residual deconvolutional networks for defocus blur detection. IET Image Processing, 15(3), 724–734. https://doi.org/10.1049/ipr2.12057
