Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To deal with this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem, i.e., the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (δ, ε)-global robustness property of DNNs, and investigate novel strategies for combining constraint-solving and abstraction-based approaches to work with these properties.

First, we propose a method to verify local robustness that improves a recent proposal of analyzing DNNs through the classic abstract interpretation technique with a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties.

Second, we propose a Lipschitz constant based verification framework. By utilising Lipschitz constants computed via semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when restricted to small regions; a tightened Lipschitz constant can help prove local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus support the verification of global robustness.

Third, we show how the proposed abstract interpretation and Lipschitz constant based approaches can benefit from each other to obtain more precise results.
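As a rough illustration of the symbolic propagation idea, here is a minimal sketch (not the paper's implementation; the two-layer example network, the function names, and the interval fallback at unstable ReLU neurons are simplifications assumed here). Each neuron is kept as a pair of lower/upper affine forms over the input variables, and the forms are concretized only when the ReLU state must be decided:

```python
import numpy as np

def concretize(E, lo, hi, upper):
    """Bound affine forms c·x + c0 over the input box [lo, hi]."""
    C, c0 = E[:, :-1], E[:, -1]
    Cp, Cn = np.maximum(C, 0.0), np.minimum(C, 0.0)
    return (Cp @ hi + Cn @ lo + c0) if upper else (Cp @ lo + Cn @ hi + c0)

def symbolic_bounds(weights, biases, lo, hi):
    """Propagate lower/upper affine forms through a ReLU network.
    Stable ReLU neurons keep their symbolic form; unstable ones fall
    back to a concrete interval [0, ub]."""
    n = len(lo)
    L = np.hstack([np.eye(n), np.zeros((n, 1))])  # lower forms, rows: c·x + c0
    U = L.copy()                                  # upper forms
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        L, U = Wp @ L + Wn @ U, Wp @ U + Wn @ L   # exact affine step
        L[:, -1] += b
        U[:, -1] += b
        if k == len(weights) - 1:                 # no ReLU after the last layer
            break
        lb = concretize(L, lo, hi, upper=False)
        ub = concretize(U, lo, hi, upper=True)
        for i in range(len(lb)):
            if ub[i] <= 0:                        # stably inactive: output is 0
                L[i], U[i] = 0.0, 0.0
            elif lb[i] < 0:                       # unstable: lose the symbols
                L[i], U[i] = 0.0, 0.0
                U[i, -1] = ub[i]                  # keep the interval [0, ub]
            # stably active neurons keep both symbolic forms unchanged
    return concretize(L, lo, hi, False), concretize(U, lo, hi, True)

# Cancellation example: y = relu(x + 2) + relu(-x + 2) = 4 for all x in [-1, 1].
W1, b1 = np.array([[1.0], [-1.0]]), np.array([2.0, 2.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
lo, hi = np.array([-1.0]), np.array([1.0])
print(symbolic_bounds([W1, W2], [b1, b2], lo, hi))  # symbolic: exactly [4, 4]
# Plain interval propagation would only give [2, 6] on the same network.
```

The final comment shows the precision gain the abstract refers to: because the symbolic forms track how the two hidden neurons depend on the same input variable, the opposite-sign coefficients cancel, whereas an interval domain treats the neurons as independent and accumulates slack.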
Moreover, they can also be exploited and combined to improve constraint-solving based approaches. We implement our methods in the tool PRODeep and report detailed experimental results on several benchmarks.
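To illustrate how a global Lipschitz constant certifies local robustness and accelerates batch verification, here is a minimal sketch (the paper obtains tighter constants via semidefinite programming; the product-of-spectral-norms bound and the function names below are simplifications assumed for this example):

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Coarse global l2 Lipschitz bound for a ReLU network: the product
    of the layers' spectral norms (ReLU itself is 1-Lipschitz). The
    SDP-based constant from the paper is tighter; this is illustrative."""
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

def certified_radius(logits, K):
    """Radius r of an l2 ball within which the predicted class cannot
    change, given that every logit moves by at most K * r."""
    c = int(np.argmax(logits))
    margin = logits[c] - np.max(np.delete(logits, c))
    return margin / (2.0 * K)  # the top-two gap shrinks by at most 2*K*r

# Batch acceleration: once a point is certified with radius r, every
# other sample within distance r of it needs no further expensive check.
K = lipschitz_upper_bound([2.0 * np.eye(2), 3.0 * np.eye(2)])  # 2 * 3 = 6
r = certified_radius(np.array([3.0, 1.0, 0.0]), K)             # margin 2 -> r = 1/6
```

The design choice here is the direction of the trade-off the abstract describes: a single global constant K is cheap to reuse across many inputs, while restricting the Lipschitz estimate to a small region around one input yields a smaller K and hence a larger certified radius.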
Yang, P., Li, J., Liu, J., Huang, C. C., Li, R., Chen, L., … Zhang, L. (2021). Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation. Formal Aspects of Computing, 33(3), 407–435. https://doi.org/10.1007/s00165-021-00548-1