Estimating uncertainty online against an adversary

Citations: 16 · Mendeley readers: 50

Abstract

Assessing uncertainty is an important step towards ensuring the safety and reliability of machine learning systems. Existing uncertainty estimation techniques may fail when their modeling assumptions are not met, e.g. when the data distribution differs from the one seen at training time. Here, we propose techniques that assess a classification algorithm's uncertainty via calibrated probabilities (i.e. probabilities that match empirical outcome frequencies in the long run) and which are guaranteed to be reliable (i.e. accurate and calibrated) on out-of-distribution input, including input generated by an adversary. This represents an extension of classical online learning that handles uncertainty in addition to guaranteeing accuracy under adversarial assumptions. We establish formal guarantees for our methods, and we validate them on two real-world problems: question answering and medical diagnosis from genomic data.
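
The abstract's notion of calibration (predicted probabilities that match empirical outcome frequencies in the long run) can be made concrete with a simple diagnostic. Below is a minimal Python sketch of a binned reliability check; it is an illustrative assumption on our part, not the authors' online algorithm, which additionally provides guarantees against adversarially chosen inputs.

```python
import numpy as np

def calibration_table(probs, outcomes, n_bins=10):
    """Compare predicted probabilities to empirical outcome frequencies.

    probs    : array of predicted probabilities for the positive class
    outcomes : array of 0/1 outcomes observed over time
    A forecaster is calibrated if, within each probability bin, the mean
    predicted probability matches the empirical frequency of positives.
    """
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    # Assign each prediction to one of n_bins equal-width bins on [0, 1].
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((b, probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows  # (bin index, mean predicted prob, empirical frequency, count)

# Example: for a well-calibrated forecaster, the two middle columns agree.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = rng.uniform(size=10_000) < p  # outcomes drawn with probability p
for b, pred, freq, n in calibration_table(p, y):
    print(f"bin {b}: predicted {pred:.2f}, observed {freq:.2f} (n={n})")
```

On in-distribution data like this synthetic example, the predicted and observed columns coincide up to sampling noise; the paper's contribution is to keep such agreement provable even when the input sequence is out-of-distribution or adversarial.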

Citation (APA)

Kuleshov, V., & Ermon, S. (2017). Estimating uncertainty online against an adversary. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 2110–2116). AAAI Press. https://doi.org/10.1609/aaai.v31i1.10949
