Understanding the Effect of Bias in Deep Anomaly Detection


Abstract

Anomaly detection presents a unique challenge in machine learning, due to the scarcity of labeled anomaly data. Recent work attempts to mitigate such problems by augmenting training of deep anomaly detection models with additional labeled anomaly samples. However, the labeled data often does not align with the target distribution and introduces harmful bias to the trained model. In this paper, we aim to understand the effect of a biased anomaly set on anomaly detection. Concretely, we view anomaly detection as a supervised learning task where the objective is to optimize the recall at a given false positive rate. We formally study the relative scoring bias of an anomaly detector, defined as the difference in performance with respect to a baseline anomaly detector. We establish the first finite sample rates for estimating the relative scoring bias for deep anomaly detection, and empirically validate our theoretical results on both synthetic and real-world datasets. We also provide an extensive empirical study on how a biased training anomaly set affects the anomaly score function and therefore the detection performance on different anomaly classes. Our study demonstrates scenarios in which the biased anomaly set can be useful or problematic, and provides a solid benchmark for future research.
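The abstract frames anomaly detection as optimizing recall at a given false positive rate, and defines the relative scoring bias as the performance difference against a baseline detector. As a minimal sketch of these two quantities (assuming higher scores mean "more anomalous"; the function names and thresholding convention here are illustrative, not the paper's implementation):

```python
import numpy as np

def recall_at_fpr(scores_normal, scores_anomaly, target_fpr=0.05):
    """Recall of an anomaly detector at a fixed false positive rate.

    The detection threshold is chosen so that roughly `target_fpr`
    of the normal samples score above it; recall is the fraction of
    anomalies exceeding that threshold.
    """
    # Threshold at the (1 - target_fpr) quantile of the normal scores.
    threshold = np.quantile(scores_normal, 1.0 - target_fpr)
    return float(np.mean(scores_anomaly > threshold))

def relative_scoring_bias(det_scores, base_scores, target_fpr=0.05):
    """Difference in recall@FPR between a detector trained with a
    (possibly biased) labeled anomaly set and a baseline detector.

    Each argument is a pair (scores_on_normal, scores_on_anomaly).
    """
    return (recall_at_fpr(*det_scores, target_fpr)
            - recall_at_fpr(*base_scores, target_fpr))
```

A positive value indicates the augmented detector recalls more anomalies than the baseline at the same false positive budget; a negative value indicates the biased anomaly set hurt detection on that anomaly class.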

Citation (APA)

Ye, Z., Chen, Y., & Zheng, H. (2021). Understanding the Effect of Bias in Deep Anomaly Detection. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3314–3320). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/456
