Bayesian convolutional neural network: robustly quantify uncertainty for misclassifications detection

Abstract

For safety- and mission-critical systems relying on Convolutional Neural Networks (CNNs), it is crucial to avoid incorrect predictions that could cause accidents or financial losses. This can be achieved by quantifying and interpreting the predictive uncertainty. Current methods for uncertainty quantification rely on Bayesian CNNs that approximate Bayesian inference via dropout sampling. This paper investigates different dropout methods for robustly quantifying the predictive uncertainty for misclassification detection. Specifically, the following questions are addressed: In which layers should activations be sampled? Which dropout sampling mask should be used? Which dropout probability should be used? How should the number of ensemble members be chosen? How should ensemble members be combined? How should the classification uncertainty be quantified? To answer these questions, experiments were conducted on three datasets using three different network architectures. The results showed that the classification uncertainty is best captured by averaging the predictions of all stochastic CNNs sampled from the Bayesian CNN and by validating the predictions of the Bayesian CNN with three uncertainty measures, namely thresholds on the predictive confidence, the predictive entropy, and the standard deviation. The results further showed that the optimal dropout method, specified by the sampling location, sampling mask, inference dropout probability, and number of stochastic forward passes, depends on both the dataset and the network architecture. Notwithstanding this, I propose sampling the inputs to max pooling layers with a cascade of a Multiplicative Gaussian Mask (MGM) followed by a Multiplicative Bernoulli Spatial Mask (MBSM), which robustly quantifies the classification uncertainty while keeping the loss in performance low.
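The ensemble-averaging and threshold-validation scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `model` stands for any classifier with dropout layers, and the number of passes and the three threshold values (`tau_conf`, `tau_ent`, `tau_std`) are hypothetical placeholders rather than values from the paper.

import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=30):
    """Average the softmax outputs of n_passes stochastic forward passes."""
    model.train()  # keep dropout active at inference time (MC dropout)
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)                 # ensemble average over passes
    confidence = mean_probs.max(dim=1).values      # predictive confidence
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    # standard deviation of the predicted class's probability across passes
    std = probs.std(dim=0).gather(1, mean_probs.argmax(dim=1, keepdim=True)).squeeze(1)
    return mean_probs.argmax(dim=1), confidence, entropy, std

def accept(confidence, entropy, std, tau_conf=0.9, tau_ent=0.5, tau_std=0.1):
    """Flag a prediction as trustworthy only if all three measures pass."""
    return (confidence >= tau_conf) & (entropy <= tau_ent) & (std <= tau_std)

Predictions rejected by `accept` would be routed to a fallback (e.g., human review), which is how the three uncertainty measures serve misclassification detection.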
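The proposed MGM-then-MBSM cascade could look like the sketch below. The exact formulation of the two masks is my assumption: here MGM multiplies activations by noise drawn from N(1, p/(1-p)), as in Gaussian dropout, and MBSM zeroes whole feature maps, as in spatial dropout; the paper may parameterize them differently.

import torch
import torch.nn as nn

class MGMThenMBSM(nn.Module):
    """Cascade of a Multiplicative Gaussian Mask and a Multiplicative
    Bernoulli Spatial Mask, applied to the input of a max pooling layer."""
    def __init__(self, p=0.1):
        super().__init__()
        self.alpha = p / (1.0 - p)        # Gaussian dropout variance (assumed)
        self.spatial = nn.Dropout2d(p=p)  # Bernoulli mask over whole feature maps

    def forward(self, x):
        if self.training:  # sampling stays active during MC-dropout inference
            noise = torch.randn_like(x) * self.alpha ** 0.5 + 1.0
            x = x * noise  # MGM: multiplicative Gaussian mask
        return self.spatial(x)  # MBSM: multiplicative Bernoulli spatial mask

# Hypothetical usage, inserted before each max pooling layer:
# nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
#               MGMThenMBSM(p=0.1), nn.MaxPool2d(2))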

Citation (APA)

Njieutcheu Tassi, C. R. (2020). Bayesian convolutional neural network: robustly quantify uncertainty for misclassifications detection. In Communications in Computer and Information Science (Vol. 1144 CCIS, pp. 118–132). Springer. https://doi.org/10.1007/978-3-030-37548-5_10
