Bilevel Parameter Learning for Higher-Order Total Variation Regularisation Models

Abstract

We consider a bilevel optimisation approach for parameter learning in higher-order total variation image reconstruction models. In addition to the least-squares cost functional naturally used in bilevel learning, we propose and analyse an alternative cost based on a Huber-regularised TV seminorm. Differentiability properties of the solution operator are verified and a first-order optimality system is derived. Based on the adjoint information, a combined quasi-Newton/semismooth Newton algorithm is proposed for the numerical solution of the bilevel problems. Numerical experiments demonstrate the suitability of our approach and the improved performance of the new cost functional. Thanks to the bilevel optimisation framework, a detailed comparison between TGV² and ICTV is also carried out, showing the advantages and shortcomings of both regularisers depending on the structure of the processed images and their noise level.
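
As a point of reference, the bilevel learning problem described in the abstract can be sketched schematically as follows; the precise function spaces, constraints and cost definitions are those of the paper, and the Huberised TV cost written here is an assumed generic form rather than the authors' exact definition:

\[
\min_{\lambda \ge 0}\; F(u_\lambda)
\qquad \text{subject to} \qquad
u_\lambda \in \arg\min_{u}\; \tfrac{1}{2}\,\|u - f\|_{L^2(\Omega)}^2 + R_\lambda(u),
\]

where $f$ is the noisy image, $R_\lambda$ is the higher-order regulariser (e.g. $\mathrm{TGV}^2$ or ICTV) with learnable weight(s) $\lambda$, and the upper-level cost $F$ is either the least-squares cost $F(u) = \tfrac{1}{2}\,\|u - u_{\mathrm{true}}\|_{L^2}^2$ or a Huber-regularised TV cost of the form $F(u) = \int_\Omega |\nabla(u - u_{\mathrm{true}})|_\gamma \,\mathrm{d}x$, with $|\cdot|_\gamma$ a Huber smoothing of the Euclidean norm. Roughly speaking, the adjoint of the lower-level solution operator yields the reduced gradient of $F$ with respect to $\lambda$ that drives the outer quasi-Newton iteration, while the semismooth Newton component handles the nonsmooth lower-level optimality system.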

Citation (APA)

De los Reyes, J. C., Schönlieb, C. B., & Valkonen, T. (2017). Bilevel Parameter Learning for Higher-Order Total Variation Regularisation Models. Journal of Mathematical Imaging and Vision, 57(1), 1–25. https://doi.org/10.1007/s10851-016-0662-8
