Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration

27 citations · 36 Mendeley readers

Abstract

To facilitate widespread acceptance of AI systems guiding decision making in real-world applications, trustworthiness of deployed models is key. That is, it is crucial for predictive models to be uncertainty-aware and yield well-calibrated (and thus trustworthy) predictions both for in-domain samples and under domain shift. Recent efforts to account for predictive uncertainty include post-processing steps for trained neural networks, Bayesian neural networks, and non-Bayesian alternatives such as ensembling and evidential deep learning. Here, we propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift. We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term, and demonstrate that this results in well-calibrated and technically trustworthy predictions for a wide range of domain drifts. We comprehensively evaluate previously proposed approaches across data modalities, a large range of datasets (including sequence data), network architectures, and perturbation strategies. We observe that our modelling approach substantially outperforms existing state-of-the-art approaches, yielding well-calibrated predictions under domain drift.
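The abstract names the two ingredients of the training objective but does not define them. The sketch below is a minimal NumPy illustration of one plausible combination: a standard cross-entropy term, an entropy-encouraging regularizer on clean predictions, and a batch-level calibration gap (mean confidence minus mean accuracy) on adversarially perturbed inputs. The function names, the weights `lam` and `gamma`, and this particular calibration penalty are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def predictive_entropy(probs):
    # Mean Shannon entropy of the predictive distributions.
    return -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))

def combined_loss(logits_clean, logits_perturbed, labels, lam=0.1, gamma=0.1):
    """Illustrative objective: NLL - lam * entropy + gamma * calibration gap.

    lam, gamma and the calibration term are assumptions for this sketch,
    not the coefficients or loss used in the paper.
    """
    p_clean = softmax(logits_clean)
    p_pert = softmax(logits_perturbed)
    nll = cross_entropy(p_clean, labels)
    # Entropy-encouraging term: subtracting entropy discourages overconfidence.
    ent = predictive_entropy(p_clean)
    # Simple calibration proxy on perturbed samples: |mean confidence - accuracy|.
    conf = p_pert.max(axis=1)
    acc = (p_pert.argmax(axis=1) == labels).astype(float)
    calib_gap = abs(conf.mean() - acc.mean())
    return nll - lam * ent + gamma * calib_gap
```

In practice the perturbed logits would come from adversarially perturbed inputs (e.g. a gradient-based attack on the calibration term), and the whole objective would be minimized by backpropagation in a deep-learning framework rather than evaluated with NumPy.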

Citation (APA)

Tomani, C., & Buettner, F. (2021). Towards Trustworthy Predictions from Deep Neural Networks with Fast Adversarial Calibration. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11B, pp. 9886–9896). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17188
