Fastened CROWN: Tightened neural network robustness certificates

Abstract

The rapid growth of real-world deep learning applications has been accompanied by serious safety concerns. To address these concerns, much research has been devoted to reliably evaluating the fragility of deep neural networks. Apart from devising adversarial attacks, quantifiers that certify safeguarded regions have also been designed over the past five years. The summarizing work of Salman et al. (2019) unifies a family of existing verifiers under a convex relaxation framework. We draw inspiration from that work and further prove the optimality of the deterministic CROWN solutions (Zhang et al. 2018) to a given linear programming problem under mild constraints. This theoretical result shows the computationally expensive linear-programming-based method to be unnecessary. We then propose an optimization-based approach, FROWN (Fastened CROWN): a general algorithm for tightening the robustness certificates of neural networks. Extensive experiments on a variety of individually trained networks verify the effectiveness of FROWN in safeguarding larger robust regions.
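To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the idea the abstract describes: a CROWN-style linear relaxation certifies a lower bound on a toy two-layer ReLU network over an ℓ∞ ball, and a FROWN-style refinement then treats the lower-relaxation slope of each unstable neuron as a free parameter in [0, 1] and optimizes it by gradient ascent to tighten the certified bound. The network sizes, the adaptive slope initialization, and the optimizer settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of CROWN-style linear relaxation
# bounds and FROWN-style tightening: the lower-bound slope of each unstable
# ReLU is a free parameter in [0, 1], optimized to maximize the certified
# lower bound. Sizes and hyperparameters are illustrative assumptions.
import torch

torch.manual_seed(0)

# Toy 2-layer network: f(x) = w2 @ relu(W1 @ x + b1) + b2 (scalar output).
d_in, d_hid = 4, 8
W1 = torch.randn(d_hid, d_in)
b1 = torch.randn(d_hid)
w2 = torch.randn(d_hid)
b2 = torch.randn(())

x0 = torch.randn(d_in)        # center of the l_inf ball
eps = 0.1                     # perturbation radius

# Interval bounds on the pre-activations z = W1 x + b1 over the ball.
z0 = W1 @ x0 + b1
rad = eps * W1.abs().sum(dim=1)
l, u = z0 - rad, z0 + rad

def certified_lower_bound(alpha):
    """Linear-relaxation lower bound on f over the l_inf ball.

    For each unstable neuron (l < 0 < u):
      relu(z) >= alpha * z                       (any alpha in [0, 1])
      relu(z) <= s * (z - l),  s = u / (u - l)   (chord upper bound)
    Positive w2 entries use the lower relaxation, negative ones the upper,
    giving f >= a^T z + c0 >= a^T z0 + c0 - eps * ||a^T W1||_1.
    """
    s = u / (u - l)                       # upper-relaxation slope
    unstable = (l < 0) & (u > 0)
    active = l >= 0                       # relu is identity here
    low_slope = torch.where(unstable, alpha, active.float())
    up_slope = torch.where(unstable, s, active.float())
    up_icpt = torch.where(unstable, -s * l, torch.zeros_like(l))

    pos, neg = w2.clamp(min=0), w2.clamp(max=0)
    a = pos * low_slope + neg * up_slope  # relaxation picked per sign of w2
    c0 = (neg * up_icpt).sum() + b2
    # Concretize the linear bound over the l_inf ball around x0.
    return a @ z0 + c0 - eps * (a @ W1).abs().sum()

# CROWN-style fixed adaptive slope: 1 if u >= -l, else 0 (an assumption
# mirroring the adaptive scheme; the exact rule may differ in the paper).
alpha_crown = (u >= -l).float()
print("CROWN-style bound:", certified_lower_bound(alpha_crown).item())

# FROWN-style tightening: optimize the slopes via a sigmoid reparameterization.
theta = torch.logit(alpha_crown.clamp(1e-3, 1 - 1e-3)).requires_grad_()
opt = torch.optim.Adam([theta], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = -certified_lower_bound(torch.sigmoid(theta))  # maximize the bound
    loss.backward()
    opt.step()
print("FROWN-style bound:", certified_lower_bound(torch.sigmoid(theta)).item())
```

Because the fixed CROWN slopes are a feasible starting point for the optimization, the tightened bound can only match or exceed the initial one; this is the sense in which the certificate is "fastened" without resorting to a per-neuron linear program.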

Citation (APA)

Lyu, Z., Ko, C. Y., Kong, Z., Wong, N., Lin, D., & Daniel, L. (2020). Fastened CROWN: Tightened neural network robustness certificates. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5037–5044). AAAI Press. https://doi.org/10.1609/aaai.v34i04.5944
