An ELU Network with Total Variation for Image Denoising

20 Citations · 31 Readers

Abstract

In this paper, we propose a novel convolutional neural network (CNN) for image denoising that uses the exponential linear unit (ELU) as its activation function. We investigate its suitability by analyzing the connection between ELU and the trainable nonlinear reaction diffusion (TNRD) model, as well as residual denoising. Batch normalization (BN) is indispensable for residual denoising and for convergence; however, directly stacking BN and ELU degrades the performance of the CNN. To mitigate this issue, we design a novel combination of activation and normalization layers that exploits the strengths of the ELU network, and we discuss the underlying rationale. Moreover, inspired by the fact that minimizing total variation (TV) can be applied to image denoising, we propose a TV-regularized L2 loss to evaluate the training effect during the iterations. Finally, extensive experiments show that our model outperforms several recent and popular approaches on Gaussian denoising with specific or randomized noise levels, for both grayscale and color images.
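The abstract names two concrete ingredients, the ELU activation and a TV-regularized L2 loss. The PyTorch sketch below is a minimal illustration of both, assuming an anisotropic TV penalty and a hypothetical weighting factor `tv_weight`; the paper's exact loss formulation, TV variant, and hyperparameters are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, alpha * (exp(x) - 1) otherwise.
    return torch.where(x > 0, x, alpha * (torch.exp(x) - 1))

def tv_regularized_l2_loss(denoised, clean, tv_weight=1e-4):
    # L2 (MSE) fidelity term between the network output and the clean image.
    l2 = F.mse_loss(denoised, clean)
    # Anisotropic total variation on the output: mean absolute difference
    # between horizontally and vertically adjacent pixels, which encourages
    # piecewise-smooth denoised images.
    dh = torch.abs(denoised[..., :, 1:] - denoised[..., :, :-1]).mean()
    dv = torch.abs(denoised[..., 1:, :] - denoised[..., :-1, :]).mean()
    return l2 + tv_weight * (dh + dv)
```

In a training loop, this loss would simply replace the plain MSE criterion; the TV term only adds a smoothness penalty on the network output.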

Citation (APA)

Wang, T., Qin, Z., & Zhu, M. (2017). An ELU Network with Total Variation for Image Denoising. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10636 LNCS, pp. 227–237). Springer Verlag. https://doi.org/10.1007/978-3-319-70090-8_24
