Deep null space learning for inverse problems: Convergence analysis and rates


Abstract

Recently, deep learning based methods have emerged as a new paradigm for solving inverse problems. These methods empirically show excellent performance but lack theoretical justification; in particular, no results on their regularization properties are available. This is notably the case for two-step deep learning approaches, where a classical reconstruction method is applied to the data in a first step and a trained deep neural network is applied to improve the result in a second step. In this paper, we close the gap between practice and theory for a particular network structure in a two-step approach. For that purpose, we propose using so-called null space networks and introduce the concept of φ-regularization. Combined with a standard regularization method as reconstruction layer, the proposed deep null space learning approach is shown to be a φ-regularization method, and convergence rates are derived. The proposed null space network structure naturally preserves data consistency, which is considered a key property of neural networks for solving inverse problems.
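To make the data-consistency claim concrete, here is a minimal numerical sketch (not the authors' implementation) of a null space network of the form L(x) = x + (I - A⁺A) U(x), assuming a linear forward operator A given as a matrix; trained_net is a hypothetical placeholder for the learned component U. Since the correction (I - A⁺A) U(x) lies in the null space of A, applying A to L(x) returns the same data as applying A to x.

    import numpy as np

    def trained_net(x):
        # Hypothetical placeholder for a trained network U: X -> X.
        return np.tanh(x)

    def null_space_network(A, x):
        # Null space network L(x) = x + (I - A^+ A) U(x).
        # The added term is projected onto null(A), so A @ L(x) == A @ x.
        A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudoinverse A^+
        u = trained_net(x)                # learned update U(x)
        return x + u - A_pinv @ (A @ u)   # add only the null space part of U(x)

    # Data consistency check: A L(x) equals A x up to numerical error.
    A = np.random.randn(3, 5)             # underdetermined, nontrivial null space
    x = np.random.randn(5)
    lx = null_space_network(A, x)
    assert np.allclose(A @ lx, A @ x)

In a two-step approach, x would be the output of a classical reconstruction (regularization) method applied to the measured data, and the null space network refines it without altering how it fits the data.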

Citation (APA)

Schwab, J., Antholzer, S., & Haltmeier, M. (2019). Deep null space learning for inverse problems: Convergence analysis and rates. Inverse Problems, 35(2). https://doi.org/10.1088/1361-6420/aaf14a
