Optimization and abstraction: A synergistic approach for analyzing neural network robustness


Abstract

In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to misclassify it. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and evaluated it experimentally on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools: AI2, Reluplex, and Reluval.
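
As a rough illustration of the two ingredients the abstract combines, the NumPy sketch below first propagates interval bounds through a tiny ReLU network (the abstraction-based proof attempt) and, if the bounds are too loose to prove robustness, runs a PGD-style gradient search for a misclassifying perturbation (the counterexample search). The network weights, epsilon, and step sizes are hypothetical placeholders chosen for illustration; this is not the Charon implementation, which additionally learns a verification policy to guide the abstract interpreter.

```python
# Minimal sketch: interval abstraction (proof search) + gradient-based
# counterexample search on a toy 2-layer ReLU network. All weights and
# hyperparameters are hypothetical, not taken from the paper or Charon.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # logits

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def interval_bounds(lo, hi):
    """Propagate an L-inf box through the network with interval
    arithmetic, yielding sound lower/upper bounds on every logit."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0             # center / radius
    c1, r1 = W1 @ c + b1, np.abs(W1) @ r                # affine layer
    lo1, hi1 = np.maximum(c1 - r1, 0.0), np.maximum(c1 + r1, 0.0)  # ReLU
    c2, r2 = (lo1 + hi1) / 2.0, (hi1 - lo1) / 2.0
    return W2 @ c2 + b2 - np.abs(W2) @ r2, W2 @ c2 + b2 + np.abs(W2) @ r2

def pgd_counterexample(x, eps, steps=50, lr=0.05):
    """Gradient-ascent (PGD-style) search inside the eps-ball for a
    perturbation that flips the predicted class; None if none found."""
    label, adv = int(np.argmax(forward(x))), x.copy()
    for _ in range(steps):
        h_pre = W1 @ adv + b1
        logits = W2 @ np.maximum(h_pre, 0.0) + b2
        runner = int(np.argmax(np.delete(logits, label)))
        runner += runner >= label                        # undo the delete
        # Gradient of (runner-up logit - true logit) w.r.t. the input.
        grad = W1.T @ ((W2[runner] - W2[label]) * (h_pre > 0))
        adv = np.clip(adv + lr * np.sign(grad), x - eps, x + eps)
        if int(np.argmax(forward(adv))) != label:
            return adv
    return None

x, eps = np.array([0.3, -0.7]), 0.1
lo, hi = interval_bounds(x - eps, x + eps)
label = int(np.argmax(forward(x)))
if lo[label] > np.delete(hi, label).max():
    print("verified: true logit's lower bound beats every other upper bound")
else:
    cex = pgd_counterexample(x, eps)
    if cex is not None:
        print("counterexample found:", cex)
    else:
        print("inconclusive: bounds too loose; a real tool would refine")
```

The "inconclusive" branch is exactly where combining the two methods pays off: interval abstraction is sound but incomplete (it over-approximates), while gradient search is fast but can never prove robustness, so each side resolves cases the other cannot.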

Citation (APA)

Anderson, G., Dillig, I., Pailoor, S., & Chaudhuri, S. (2019). Optimization and abstraction: A synergistic approach for analyzing neural network robustness. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) (pp. 731–744). Association for Computing Machinery. https://doi.org/10.1145/3314221.3314614
