Theoretical Characterization of Deep Neural Networks

Abstract

Deep neural networks remain poorly understood mathematically; however, there has been considerable recent work on analyzing and explaining their success across a variety of pattern recognition tasks. We describe some of the mathematical techniques used to characterize neural networks in terms of the complexity of the assigned classification or regression task, or of the functions they learn, and try to relate these characterizations to architectural choices. We explain some of the measurable quantifiers that can be used to define the expressivity of a neural network, including homological complexity and curvature. We also describe neural networks from the viewpoint of scattering transforms and present some of the mathematical and intuitive justifications for them. Finally, we share a technique for visualizing and analyzing neural networks based on the concept of Riemann curvature.
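As a rough illustration of the kind of quantifier the abstract mentions, below is a minimal NumPy sketch (our own hypothetical example, not code from the chapter) that tracks how the length of a circular input trajectory grows as it passes through a randomly initialized network; trajectory growth under random layers is one simple length/curvature-style expressivity proxy used in the literature. The layer widths, depth, weight scales, and tanh nonlinearity are all illustrative assumptions.

# Illustrative sketch only: measures trajectory-length growth through a random deep net
# as a simple expressivity proxy. Widths, depth, and activation are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def trajectory_length(points):
    # Sum of Euclidean distances between consecutive points on the sampled curve.
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

# A unit circle in the input plane, sampled densely.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
x = np.stack([np.cos(t), np.sin(t)], axis=1)              # shape (1000, 2)

width, depth = 64, 6
h = x @ rng.normal(scale=1.0 / np.sqrt(2), size=(2, width))  # random input layer
print(f"layer 0: trajectory length = {trajectory_length(h):.2f}")

for layer in range(1, depth + 1):
    W = rng.normal(scale=2.0 / np.sqrt(width), size=(width, width))
    h = np.tanh(h @ W)                                       # random layer + nonlinearity
    print(f"layer {layer}: trajectory length = {trajectory_length(h):.2f}")

# Deeper layers typically fold the circle into an increasingly long, contorted curve;
# how fast this length grows is one crude, measurable notion of expressivity.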

Citation (APA)

Kaul, P., & Lall, B. (2020). Theoretical Characterization of Deep Neural Networks. In Studies in Computational Intelligence (Vol. 866, pp. 25–63). Springer Verlag. https://doi.org/10.1007/978-3-030-31756-0_2
