Deep Orthogonal Transform Feature for Image Denoising



Abstract

Recently, CNN-based image denoising has been investigated and shown to outperform conventional vision-based techniques. However, existing networks still have limitations: they can be weak at restoring image details such as textured regions, or they may introduce other artifacts. In this paper, we introduce noise-separable orthogonal transform features into a neural denoising framework. Specifically, we choose the wavelet transform and PCA as orthogonal transforms, both of which have conventionally achieved good denoising performance. In addition to the spatial image signal, the orthogonal transform features (OTFs) are fed into the denoising network. To guide the denoising process, we also concatenate OTFs computed from an image denoised by an existing method; these serve as a prior for learning the denoising process. Experiments confirm that the proposed multi-input network achieves better denoising performance than single-input networks.
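As a rough illustration of the multi-input idea described in the abstract, the sketch below computes single-level Haar wavelet subbands (one concrete orthogonal transform; the paper's exact transform choices and network architecture are not reproduced here) and stacks them with the noisy image as a multi-channel network input. The function names and the use of a pre-denoised image as a stand-in guide are assumptions for illustration only.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D orthonormal Haar wavelet decomposition.
    Returns the four half-resolution subbands (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def build_network_input(noisy, guide):
    """Stack the noisy image with orthogonal transform features (OTFs)
    from both the noisy image and a pre-denoised guide image, each
    subband upsampled back to full resolution by nearest-neighbour
    repetition so all channels share one spatial size."""
    feats = []
    for src in (noisy, guide):
        for band in haar_dwt2(src):
            feats.append(np.repeat(np.repeat(band, 2, axis=0), 2, axis=1))
    # 1 spatial channel + 4 noisy-OTF + 4 guide-OTF channels
    return np.stack([noisy] + feats, axis=0)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
x = build_network_input(noisy, clean)  # clean used as a stand-in guide
print(x.shape)  # (9, 64, 64)
```

Because the Haar transform here is orthonormal, the subbands preserve the image's energy, which is what makes such features "noise-separable": additive white noise spreads evenly across subbands while image structure concentrates in a few of them.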

Citation (APA)

Shin, Y. H., Park, M. J., Lee, O. Y., & Kim, J. O. (2020). Deep Orthogonal Transform Feature for Image Denoising. IEEE Access, 8, 66898–66909. https://doi.org/10.1109/ACCESS.2020.2986827
