Unsupervised Anomaly Detection Using Style Distillation


Abstract

Autoencoders (AEs) have been widely used for unsupervised anomaly detection. They learn from normal samples so that they produce high reconstruction errors for anomalous samples. However, AEs can suffer from over-detection because they imperfectly reconstruct not only anomalous samples but also normal ones. To address this issue, we introduce an outlier-exposed style distillation network (OE-SDN) that mimics the mild distortions caused by an AE, which we term style translation. We use the difference between the outputs of the OE-SDN and the AE as an alternative anomaly score. Experiments on anomaly classification and segmentation tasks show that our method outperforms existing methods.
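The abstract's scoring idea — replacing the raw reconstruction error with the discrepancy between the OE-SDN output and the AE output — can be sketched as follows. This is an illustrative sketch only: the function name, the use of a mean L1 distance, and the toy arrays standing in for the two networks' outputs are assumptions, not the paper's exact formulation.

```python
import numpy as np

def anomaly_score(ae_output, oe_sdn_output):
    """Per-sample anomaly score as the mean absolute difference between
    the AE reconstruction and the OE-SDN output (hypothetical distance
    choice; the paper may use a different metric)."""
    diff = np.abs(ae_output - oe_sdn_output)
    # Average over all non-batch dimensions -> one scalar score per sample.
    return diff.reshape(diff.shape[0], -1).mean(axis=1)

# Toy example: a batch of two 4x4 "images". For a normal sample the
# OE-SDN mimics the AE's mild distortions, so the outputs agree; for an
# anomalous sample they diverge, yielding a higher score.
ae_out = np.zeros((2, 4, 4))
sdn_out = np.zeros((2, 4, 4))
sdn_out[1] += 0.5  # the second sample's outputs disagree
scores = anomaly_score(ae_out, sdn_out)
```

The intent is that normal samples, which both networks distort in the same mild way, receive near-zero scores, while anomalies, which only the AE fails on, stand out.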

Citation (APA)

Chung, H., Park, J., Keum, J., Ki, H., & Kang, S. (2020). Unsupervised Anomaly Detection Using Style Distillation. IEEE Access, 8, 221494–221502. https://doi.org/10.1109/ACCESS.2020.3043473
