On the Verification of Neural ODEs with Stochastic Guarantees

Abstract

We show that Neural ODEs, an emerging class of time-continuous neural networks, can be verified by solving a set of global-optimization problems. For this purpose, we introduce Stochastic Lagrangian Reachability (SLR), an abstraction-based technique for constructing a tight Reachtube (an over-approximation of the set of reachable states over a given time horizon), and provide stochastic guarantees in the form of confidence intervals for the Reachtube bounds. SLR inherently avoids the infamous wrapping effect (accumulation of over-approximation errors) by performing local optimization steps to expand safe regions instead of repeatedly forward-propagating them, as deterministic reachability methods do. To enable fast local optimizations, we introduce a novel forward-mode adjoint sensitivity method that computes gradients without backpropagation. Finally, we establish asymptotic and non-asymptotic convergence rates for SLR.
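
To make the Reachtube idea concrete, the sketch below propagates a ball of initial states through a toy ODE and records the empirical spread around a reference trajectory. This is naive Monte Carlo, not SLR itself: the dynamics, sample count, and ball radius are all illustrative assumptions, and the empirical maximum under-approximates the true radius. SLR's contribution is to replace such sampling with local gradient-based optimization steps and to attach confidence intervals to the resulting bounds.

```python
# Illustrative sketch only: naive Monte Carlo bounding of reachable states.
# SLR instead optimizes over the initial ball and derives confidence intervals.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Toy nonlinear dynamics standing in for a trained Neural ODE.
    return np.tanh(np.array([[0.5, -1.0], [1.0, 0.2]]) @ x)

rng = np.random.default_rng(1)
center, radius, n = np.array([1.0, 0.0]), 0.1, 500
ts = np.linspace(0.0, 2.0, 21)

# Sample initial states uniformly from the ball of given radius around `center`.
u = rng.normal(size=(n, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x0s = center + radius * rng.uniform(size=(n, 1)) ** 0.5 * u

# Propagate every sample and the center (reference) trajectory.
flows = np.stack([solve_ivp(f, (ts[0], ts[-1]), x0, t_eval=ts).y.T for x0 in x0s])
ref = solve_ivp(f, (ts[0], ts[-1]), center, t_eval=ts).y.T

# Empirical Reachtube radius at each time step: max distance to the reference.
radii = np.linalg.norm(flows - ref, axis=2).max(axis=0)
for t, r in zip(ts[::5], radii[::5]):
    print(f"t = {t:.1f}: empirical radius {r:.4f}")
```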
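
The final ingredient, gradients without backpropagation, can be illustrated with the classical forward (variational) sensitivity equations: augment the ODE with dS/dt = J_f(x) S, S(0) = I, so the Jacobian of the flow map arrives in the same forward pass as the state. The paper's forward-mode adjoint construction is its own method; the toy vector field f(x) = tanh(Wx + b) below is only a stand-in.

```python
# Hedged sketch: forward sensitivity of a toy "neural ODE" via the
# variational equation, computed alongside the state in one forward solve.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(2, 2))
b = rng.normal(scale=0.1, size=2)

def f(x):
    # Toy vector field f(x) = tanh(W x + b), standing in for a trained model.
    return np.tanh(W @ x + b)

def jac_f(x):
    # Jacobian of tanh(Wx + b): diag(1 - tanh^2(Wx + b)) @ W.
    s = 1.0 - np.tanh(W @ x + b) ** 2
    return s[:, None] * W

def augmented(t, y):
    # y packs the state x (2 entries) and the sensitivity S = dx(t)/dx0 (2x2).
    x, S = y[:2], y[2:].reshape(2, 2)
    return np.concatenate([f(x), (jac_f(x) @ S).ravel()])

x0 = np.array([0.5, -0.3])
y0 = np.concatenate([x0, np.eye(2).ravel()])
sol = solve_ivp(augmented, (0.0, 1.0), y0, rtol=1e-8, atol=1e-8)
xT, ST = sol.y[:2, -1], sol.y[2:, -1].reshape(2, 2)
print("x(T) =", xT)
print("dx(T)/dx0 =\n", ST)  # gradients obtained in a single forward pass
```

Because nothing is replayed backward through the solver, no intermediate trajectory needs to be stored, which is what makes the fast local optimization steps in SLR possible.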

Cite

APA

Gruenbacher, S., Hasani, R., Lechner, M., Cyranka, J., Smolka, S. A., & Grosu, R. (2021). On the Verification of Neural ODEs with Stochastic Guarantees. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 13A, pp. 11525–11535). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i13.17372
