Self-supervised learning for generalizable out-of-distribution detection

182 citations · 130 Mendeley readers

Abstract

The real-world deployment of Deep Neural Networks (DNNs) in safety-critical applications such as autonomous vehicles needs to address a variety of DNN vulnerabilities, one of which is detecting and rejecting out-of-distribution outliers that might result in unpredictable fatal errors. We propose a new technique relying on self-supervision for generalizable out-of-distribution (OOD) feature learning and for rejecting such samples at inference time. Our technique does not require prior knowledge of the distribution of targeted OOD samples and incurs no extra overhead compared to other methods. We perform multiple image classification experiments and observe that our technique performs favorably against state-of-the-art OOD detection methods. Interestingly, we find that our method also reduces in-distribution classification risk by rejecting samples near the boundaries of the training set distribution.
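The abstract describes rejecting OOD samples at inference time based on features learned during training. The sketch below is not the paper's method; it is a minimal, hypothetical illustration of the general inference-time rejection pattern the abstract refers to: score each input with the classifier and reject it as OOD when the confidence falls below a threshold. The function names and the threshold value are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_rejection(logits, threshold=0.9):
    """Return the predicted class index per sample, or -1 to
    reject the sample as out-of-distribution.

    `threshold` is a hypothetical confidence cutoff; in practice
    it would be tuned on held-out in-distribution data.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)   # confidence = max class probability
    preds = probs.argmax(axis=-1)
    return np.where(conf >= threshold, preds, -1)
```

A confident prediction (peaked logits) is kept, while a near-uniform output is rejected, which also tends to filter in-distribution samples lying near decision boundaries, consistent with the reduced classification risk the abstract reports.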

Citation (APA)

Mohseni, S., Pitale, M., Yadawa, J. B. S., & Wang, Z. (2020). Self-supervised learning for generalizable out-of-distribution detection. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5216–5223). AAAI press. https://doi.org/10.1609/aaai.v34i04.5966
