Denoising of image gradients and constrained total generalized variation

Abstract

We derive a denoising method that uses higher-order derivative information. Our method is motivated by work on denoising the normal vectors of an image, which are then used to obtain a better denoising of the image itself. We propose to denoise image gradients instead of image normals, since this leads to a convex optimization problem. We show how the denoising of the image gradient and of the image itself can be done simultaneously in one optimization problem. It turns out that the resulting problem is similar to total generalized variation denoising, thus shedding more light on the motivation of the total generalized variation penalties. Our approach, however, works with constraints rather than penalty functionals. As a consequence, there is a natural way to choose one of the parameters of the problem, and we motivate a choice rule for the second parameter.
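
To make the constrained viewpoint concrete, the display below is a minimal sketch of a second-order model of the kind described in the abstract. The particular norms, the symmetrized derivative \mathcal{E}, and the bounds \delta and \varepsilon are illustrative assumptions, not necessarily the exact formulation of the paper.

% Illustrative sketch (assumed form, not necessarily the paper's exact model):
% denoise the image u and a gradient field w jointly, with the data fidelity
% and the gradient fidelity posed as constraints rather than penalty terms.
\begin{equation*}
  \min_{u,\,w} \; \|\mathcal{E} w\|_{1}
  \quad \text{subject to} \quad
  \|u - f\|_{2} \le \delta
  \quad \text{and} \quad
  \|\nabla u - w\|_{2} \le \varepsilon,
\end{equation*}
% where f is the noisy image, \nabla u the image gradient, \mathcal{E} the
% symmetrized derivative familiar from second-order TGV, \delta an estimate
% of the noise level, and \varepsilon the allowed mismatch between \nabla u
% and the denoised gradient field w.

Posing the data fidelity as a constraint, rather than as a weighted penalty as in standard TGV, is what would allow \delta to be tied directly to a known or estimated noise level; this is one plausible reading of the "natural" parameter choice mentioned in the abstract.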

Citation (APA)

Komander, B., & Lorenz, D. A. (2017). Denoising of image gradients and constrained total generalized variation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10302 LNCS, pp. 435–446). Springer Verlag. https://doi.org/10.1007/978-3-319-58771-4_35
