Variational networks: Connecting variational methods and deep learning

Abstract

In this paper, we introduce variational networks (VNs) for image reconstruction. VNs are fully learned models based on the framework of incremental proximal gradient methods. They provide a natural transition between classical variational methods and state-of-the-art residual neural networks. Due to their incremental nature, VNs are very efficient, but they only approximately minimize the underlying variational model. Surprisingly, our numerical experiments on image reconstruction problems show that giving up exact minimization leads to a consistent performance increase, in particular in the case of convex models.
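
To make the construction concrete, the following is a minimal sketch (in NumPy, not the authors' code) of one incremental step of such a network: a residual update x_{t+1} = x_t - grad E_t(x_t), where the energy combines learned filter/activation pairs with an L2 data-fidelity term. All names here (vn_step, the filter list, the activation functions, lam) are illustrative placeholders for quantities that the paper learns end-to-end.

    import numpy as np
    from scipy.ndimage import convolve

    def vn_step(x, y, filters, activations, lam):
        """One incremental (residual) update of a variational network.

        x           -- current image estimate (2-D array)
        y           -- degraded observation (2-D array)
        filters     -- list of small 2-D kernels (learned in the paper)
        activations -- list of elementwise functions standing in for the
                       learned influence functions phi_i'
        lam         -- data-term weight (learned per step in the paper)
        """
        grad = lam * (x - y)                      # gradient of the L2 data term
        for k, phi in zip(filters, activations):
            r = phi(convolve(x, k, mode="wrap"))  # filter response through phi_i'
            # adjoint of convolution = convolution with the flipped kernel
            grad += convolve(r, k[::-1, ::-1], mode="wrap")
        return x - grad                           # residual / gradient step

Unrolling a fixed number of such steps, each with its own filters, activations, and lam, yields the feed-forward network that is trained end-to-end. Because each step performs a single gradient update rather than a full minimization, the scheme only approximately minimizes the underlying variational model, which is precisely the trade-off the abstract describes.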

Citation (APA)

Kobler, E., Klatzer, T., Hammernik, K., & Pock, T. (2017). Variational networks: Connecting variational methods and deep learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10496 LNCS, pp. 281–293). Springer Verlag. https://doi.org/10.1007/978-3-319-66709-6_23
