Learning Posterior Distributions in Underdetermined Inverse Problems

Abstract

In recent years, classical knowledge-driven approaches for inverse problems have been complemented by data-driven methods exploiting the power of machine learning and especially deep learning. Purely data-driven methods, however, come with the drawback of disregarding prior knowledge of the problem, even though incorporating this knowledge into the problem-solving process has been shown to be beneficial. We thus introduce an unpaired learning approach for learning posterior distributions of underdetermined inverse problems. It combines the advantages of deep generative modeling with established ideas of knowledge-driven approaches by incorporating prior information about the inverse problem. We develop a new neural network architecture 'UnDimFlow' (short for Unequal Dimensionality Flow) consisting of two normalizing flows, one from the data space to the latent space and one from the latent space to the solution space. Additionally, we incorporate the forward operator to develop an unpaired learning method for the UnDimFlow architecture and propose a tailored point estimator to recover an optimal solution during inference. We evaluate our method on the two underdetermined inverse problems of image inpainting and super-resolution.
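The abstract describes the two-flow architecture only at a high level. The following is a minimal, purely illustrative sketch (not the authors' implementation) of the underlying idea: two invertible maps bridged by a shared latent space, where a low-dimensional measurement fixes part of the latent code and the remaining latent dimensions are sampled to obtain multiple candidate solutions. The toy affine "flows", all names, and the way the free latent dimensions are drawn are assumptions made for illustration only.

```python
# Conceptual sketch (assumption, not the authors' code): a measurement y of
# dimension m determines the first m latent dimensions via an invertible map;
# the remaining n - m latent dimensions are sampled from a standard normal,
# and a second invertible map decodes the full latent code into a solution.
import numpy as np

rng = np.random.default_rng(0)

m, n = 4, 8  # underdetermined: data dimension m < solution dimension n

# Random, well-conditioned affine maps stand in for trained normalizing flows.
A = rng.standard_normal((m, m)) + m * np.eye(m)  # data-side map  (R^m -> R^m)
B = rng.standard_normal((n, n)) + n * np.eye(n)  # solution-side map (R^n -> R^n)

def data_flow(y):
    """Map a measurement y into the data-determined part of the latent code."""
    return np.linalg.solve(A, y)

def solution_flow_inv(z):
    """Map a full latent code z (R^n) into the solution space."""
    return B @ z

def sample_solutions(y, num_samples=5):
    """Draw several candidate solutions consistent with the measurement y."""
    z_y = data_flow(y)                          # fixed, data-determined latent part
    samples = []
    for _ in range(num_samples):
        z_free = rng.standard_normal(n - m)     # unconstrained latent dimensions
        z = np.concatenate([z_y, z_free])       # full latent code in R^n
        samples.append(solution_flow_inv(z))
    return np.stack(samples)

y_obs = rng.standard_normal(m)                  # toy measurement
print(sample_solutions(y_obs).shape)            # (5, 8): five candidate solutions
```

In the actual method, both maps are trained normalizing flows, the forward operator enters the unpaired training objective, and a tailored point estimator selects a single optimal solution at inference time.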

Citation (APA)

Runkel, C., Moeller, M., Schönlieb, C. B., & Etmann, C. (2023). Learning Posterior Distributions in Underdetermined Inverse Problems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14009 LNCS, pp. 187–209). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-31975-4_15
