Finite-Time Lyapunov Exponents of Deep Neural Networks

Abstract

We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep feed-forward networks and dynamical systems, where the growth or decay of local perturbations is characterized by finite-time Lyapunov exponents. We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems. Ridges of large positive exponents divide input space into different regions that the network associates with different classes. These ridges visualize the geometry that deep networks construct in input space, shedding light on the fundamental mechanisms underlying their learning capabilities.
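As a rough illustration of the quantity the abstract describes, the sketch below estimates the maximal finite-time Lyapunov exponent of a feed-forward network at an input point, with the layer index playing the role of time. It takes the logarithm of the largest singular value of the input-output Jacobian and normalizes by depth, following the standard dynamical-systems definition. The architecture, initialization, and depth normalization here are illustrative assumptions, not the paper's exact setup.

```python
import jax
import jax.numpy as jnp

def init_params(key, widths):
    """Random weights for a small fully connected tanh network (hypothetical setup)."""
    params = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        key, k1, k2 = jax.random.split(key, 3)
        W = jax.random.normal(k1, (w_out, w_in)) / jnp.sqrt(w_in)
        b = 0.1 * jax.random.normal(k2, (w_out,))
        params.append((W, b))
    return params

def net(params, x):
    """Feed-forward map: each layer acts as one time step of a dynamical system."""
    for W, b in params[:-1]:
        x = jnp.tanh(W @ x + b)
    W, b = params[-1]          # final layer kept linear
    return W @ x + b

def max_ftle(params, x):
    """Maximal finite-time Lyapunov exponent at input x: log of the largest
    singular value of the input-output Jacobian, divided by the depth
    (the finite 'time' horizon). Normalization convention is an assumption."""
    J = jax.jacobian(lambda u: net(params, u))(x)
    sigma_max = jnp.linalg.svd(J, compute_uv=False)[0]
    return jnp.log(sigma_max) / len(params)

key = jax.random.PRNGKey(0)
params = init_params(key, [2, 64, 64, 64, 1])
x = jnp.array([0.3, -0.7])
print(max_ftle(params, x))
```

In this picture, a positive exponent at x means the network pulls nearby inputs apart, so x lies near one of the ridges the abstract describes separating class regions; a negative exponent means nearby inputs contract toward the same output.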

Citation (APA)

Storm, L., Linander, H., Bec, J., Gustavsson, K., & Mehlig, B. (2024). Finite-Time Lyapunov Exponents of Deep Neural Networks. Physical Review Letters, 132(5). https://doi.org/10.1103/PhysRevLett.132.057301
