We propose a deep neural network architecture and associated loss functions for a training algorithm that computes approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed for an approximation of a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
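A minimal sketch of the general idea in PyTorch: a feedforward network V serving as the Lyapunov function candidate is trained so that sampled states satisfy the Lyapunov conditions V(x) > 0 and ∇V(x)·f(x) < 0. The example system f, the plain MLP (the paper's architecture is compositional and more structured), the margin eps, and the hinge-style penalties are illustrative assumptions, not the paper's exact loss functions.

```python
import torch
import torch.nn as nn

# Hypothetical example system x' = f(x); the paper treats general
# nonlinear ODE systems, this damped nonlinear oscillator is only
# for illustration.
def f(x):
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([-x1 + x2, -x2 - x1**3], dim=1)

# Plain fully connected candidate V: R^n -> R. The paper's
# compositional architecture (which yields the polynomial neuron
# count) is more structured; this MLP is a simplification.
class LyapunovNet(nn.Module):
    def __init__(self, dim, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def lyapunov_loss(model, x, eps=0.1):
    """Penalize violations of the Lyapunov conditions on samples x:
    V(x) >= eps*|x|^2 (positive definiteness) and
    grad V(x) . f(x) <= -eps*|x|^2 (decrease along trajectories).
    This hinge-style loss is one common way to encode the conditions;
    the loss in the paper may differ in detail."""
    x = x.requires_grad_(True)
    V = model(x)
    # Gradient of V w.r.t. the state, needed for the orbital derivative
    gradV, = torch.autograd.grad(V.sum(), x, create_graph=True)
    orbital = (gradV * f(x)).sum(dim=1, keepdim=True)
    norm2 = (x**2).sum(dim=1, keepdim=True)
    pos = torch.relu(eps * norm2 - V).mean()        # V too small
    dec = torch.relu(orbital + eps * norm2).mean()  # V not decreasing enough
    return pos + dec

# Training loop: sample states from a compact region around the
# equilibrium and minimize the penalty loss.
model = LyapunovNet(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    x = 4.0 * torch.rand(256, 2) - 2.0  # uniform samples on [-2, 2]^2
    loss = lyapunov_loss(model, x)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The penalties vanish exactly when both Lyapunov inequalities hold with margin eps on the sampled region, so a loss near zero indicates an approximate Lyapunov function on that region.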
Grüne, L. (2021). Computing Lyapunov Functions Using Deep Neural Networks. Journal of Computational Dynamics, 8(2), 131–152. https://doi.org/10.3934/jcd.2021006