CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias

Abstract

The detection of anomalous samples in large, high-dimensional datasets is a challenging task with numerous practical applications. Recently, state-of-the-art performance has been achieved with deep learning methods, for example by using the reconstruction error of an autoencoder as the anomaly score. However, these scores are uncalibrated: they follow an unknown distribution and lack a clear interpretation. Furthermore, the reconstruction error is strongly influenced by the 'hardness' of a given sample, which leads to both false negative and false positive errors. In this paper, we empirically demonstrate the significance of this hardness bias in a range of recent deep anomaly detection methods. To address it, we propose an efficient, plug-and-play error calibration method that mitigates the hardness bias in the anomaly scoring without requiring the model to be retrained. We verify the effectiveness of our method on a range of image, time-series, and tabular datasets and against several baseline methods.
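For context, a minimal sketch of the baseline scoring scheme the abstract refers to: an autoencoder's per-sample reconstruction error used directly as an uncalibrated anomaly score. This is not the paper's CADET calibration itself; the architecture, dimensions, and data below are illustrative assumptions.

```python
# Minimal sketch (not the CADET method): autoencoder reconstruction error
# used as an uncalibrated anomaly score, as described in the abstract.
# The network sizes and the input data here are illustrative assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=32, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                     nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(),
                                     nn.Linear(16, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

@torch.no_grad()
def anomaly_scores(model, x):
    """Per-sample mean squared reconstruction error; higher => more anomalous."""
    recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)

model = AutoEncoder()
x = torch.randn(4, 32)              # placeholder batch of tabular features
print(anomaly_scores(model, x))     # raw, uncalibrated scores
```

Such raw errors follow an unknown distribution and are biased by sample hardness, which is the problem the paper's post-hoc calibration targets.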

Citation (APA)

Deng, A., Goodge, A., Ang, L. Y., & Hooi, B. (2022). CADET: Calibrated Anomaly Detection for Mitigating Hardness Bias. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2002–2008). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/278
