Statistical guarantees for the robustness of Bayesian neural networks


Abstract

We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two. Such a measure can be used, for instance, to quantify the probability of the existence of adversarial examples. Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a priori error and confidence bounds. We provide an experimental comparison of several approximate BNN inference techniques on image classification tasks associated with MNIST and a two-class subset of the GTSRB dataset. Our results enable quantification of uncertainty of BNN predictions in adversarial settings.
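To make the estimation scheme concrete, below is a minimal sketch (not the authors' released code) of how probabilistic robustness can be estimated with a priori error and confidence bounds: a Chernoff-Hoeffding bound fixes the number of posterior samples needed so that the empirical frequency is within epsilon of the true probability with confidence at least 1 - delta. The helpers `sample_weights` and `has_adversarial_example` are assumed placeholders for drawing a deterministic network from the BNN posterior and checking it (e.g., with an attack or a verifier).

```python
import math


def chernoff_sample_size(epsilon, delta):
    """Samples n such that P(|empirical mean - true mean| > epsilon) <= delta
    for a Bernoulli variable (Chernoff-Hoeffding): n >= ln(2/delta) / (2 eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))


def estimate_robustness(sample_weights, has_adversarial_example, x, radius,
                        epsilon=0.05, delta=0.01):
    """Monte Carlo estimate of the probability, over the BNN posterior, that a
    sampled network admits an adversarial example within `radius` of test point `x`.

    `sample_weights()` draws one deterministic network from the posterior;
    `has_adversarial_example(net, x, radius)` returns True/False for that network.
    Both are hypothetical helpers supplied by the caller.
    """
    n = chernoff_sample_size(epsilon, delta)
    hits = 0
    for _ in range(n):
        net = sample_weights()
        if has_adversarial_example(net, x, radius):
            hits += 1
    # Estimate is within `epsilon` of the true probability with prob. >= 1 - delta.
    return hits / n
```

With epsilon = 0.05 and delta = 0.01, for example, the bound requires about 1,060 posterior samples, independent of the network architecture or input dimension.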

Citation (APA)

Cardelli, L., Kwiatkowska, M., Laurenti, L., Paoletti, N., Patane, A., & Wicker, M. (2019). Statistical guarantees for the robustness of Bayesian neural networks. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 5693–5700). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/789
