Deep Neural Networks, Generic Universal Interpolation, and Controlled ODEs

Abstract

A recent paradigm views deep neural networks as discretizations of certain controlled ordinary differential equations, sometimes called neural ordinary differential equations. We use this perspective to link the expressiveness of deep networks to the notion of controllability of dynamical systems. Using this connection, we study an expressiveness property that we call universal interpolation and show that it is generic in a certain sense. The universal interpolation property is slightly weaker than universal approximation and disentangles supervised learning on finite training sets from generalization properties. We also show that universal interpolation holds for certain deep neural networks even if large numbers of parameters are left untrained and are instead chosen randomly. This lends theoretical support to the observation that training with random initialization can be successful even when most parameters remain largely unchanged throughout training. Our results also indicate how small the number of trainable parameters in neural ordinary differential equations can be without giving up expressiveness.
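As an informal illustration of the discretization view in the abstract (not taken from the paper), the following minimal NumPy sketch treats each explicit Euler step of a controlled ODE dx/dt = f(x(t), θ(t)) as one residual layer x ← x + h·f(x, θ_k). The ReLU vector field, step size, dimensions, and the randomly chosen, untrained parameters are illustrative assumptions; the random parameters merely echo the abstract's point that many parameters can be left at their random initialization.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def neural_ode_forward(x0, weights, biases, h=0.1):
    """Explicit Euler discretization of a controlled ODE
    dx/dt = f(x(t), theta(t)); each Euler step is one residual layer."""
    x = x0
    for W, b in zip(weights, biases):
        x = x + h * relu(W @ x + b)  # residual update = one Euler step
    return x

# Hypothetical toy setup: 4-dimensional state, 8 layers,
# parameters drawn at random and left untrained.
rng = np.random.default_rng(0)
d, n_layers = 4, 8
weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_layers)]
biases = [rng.standard_normal(d) for _ in range(n_layers)]

x0 = rng.standard_normal(d)
print(neural_ode_forward(x0, weights, biases))
```

In this reading, the layer index plays the role of time and the per-layer parameters play the role of the control; expressiveness of the network then corresponds to how much of the state space the controlled flow can reach.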

Cite (APA)

Cuchiero, C., Larsson, M., & Teichmann, J. (2020). Deep Neural Networks, Generic Universal Interpolation, and Controlled ODEs. SIAM Journal on Mathematics of Data Science, 2(3), 901–919. https://doi.org/10.1137/19M1284117
