Computing Lyapunov Functions Using Deep Neural Networks


Abstract

We propose a deep neural network architecture and associated loss functions for a training algorithm that computes approximate Lyapunov functions of systems of nonlinear ordinary differential equations. Under the assumption that the system admits a compositional Lyapunov function, we prove that the number of neurons needed to approximate a Lyapunov function with fixed accuracy grows only polynomially in the state dimension, i.e., the proposed approach is able to overcome the curse of dimensionality. We show that nonlinear systems satisfying a small-gain condition admit compositional Lyapunov functions. Numerical examples in up to ten space dimensions illustrate the performance of the training scheme.
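The abstract does not spell out the loss functions, but the standard way to train an approximate Lyapunov function is to penalize, at sampled states, violations of positivity V(x) > 0 and of the decrease condition ∇V(x)·f(x) ≤ −α|x|². The sketch below illustrates that loss construction on a toy stable linear system with a quadratic candidate V(x) = xᵀPx; the system matrix `A`, the rate `alpha`, and the hinge-style penalties are illustrative assumptions, not the paper's exact architecture or loss.

```python
import numpy as np

# Illustrative toy setup (NOT the paper's exact loss): stable linear
# system x' = A x with quadratic candidate V(x) = x^T P x.
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])

def lyapunov_loss(P, xs, alpha=0.1):
    """Mean hinge penalty over sampled states xs (one state per row) for
    violations of V(x) >= 0 and of V'(x) <= -alpha * |x|^2."""
    V = np.einsum('ni,ij,nj->n', xs, P, xs)        # V(x) = x^T P x
    Q = A.T @ P + P @ A                            # so V'(x) = x^T Q x along x' = A x
    Vdot = np.einsum('ni,ij,nj->n', xs, Q, xs)
    sq = np.sum(xs**2, axis=1)
    return np.mean(np.maximum(0.0, -V)             # positivity violation
                   + np.maximum(0.0, Vdot + alpha * sq))  # decrease violation

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=(500, 2))

P_good = np.eye(2)                                 # valid: A^T + A is negative definite
P_bad = np.array([[1.0, 0.0],
                  [0.0, -1.0]])                    # indefinite, so V fails positivity

print(lyapunov_loss(P_good, xs))   # zero: both conditions hold everywhere sampled
print(lyapunov_loss(P_bad, xs))    # positive: penalties fire
```

In the paper's setting, `P` is replaced by a deep network Vθ(x) with the gradient taken by automatic differentiation, and the compositional structure of V is what keeps the required network size polynomial in the dimension.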


CITATION STYLE

APA

Grüne, L. (2021). Computing Lyapunov Functions Using Deep Neural Networks. Journal of Computational Dynamics, 8(2), 131–152. https://doi.org/10.3934/jcd.2021006
