Unsupervised Outlier Detection via Transformation Invariant Autoencoder

Abstract

Autoencoder-based methods make up the majority of deep unsupervised outlier detection methods. However, these methods do not perform well on complex image datasets and suffer from the noise introduced by outliers, especially when the outlier ratio is high. In this paper, we propose a framework named Transformation Invariant AutoEncoder (TIAE), which achieves stable and high performance on unsupervised outlier detection. First, instead of using a conventional autoencoder, we propose a transformation invariant autoencoder for better representation learning on complex image datasets. Next, to mitigate the negative effect of the noise introduced by outliers and to stabilize network training, we select the examples most likely to be inliers in each epoch as the training set by incorporating adaptive self-paced learning into our TIAE framework. Extensive evaluations show that TIAE improves unsupervised outlier detection performance by up to 10% AUROC over other autoencoder-based methods on five image datasets.
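To make the two mechanisms described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation. The tiny autoencoder, the flip-based stand-in for "transformations," the keep-ratio schedule, and the names TinyAE, per_sample_loss, and train_tiae_like are all illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch (not the paper's code) of:
# (1) a transformation-invariant style reconstruction loss, approximated here by
#     averaging reconstruction error over randomly transformed copies of each input;
# (2) adaptive self-paced selection of the most confident inlier-like samples per epoch.
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

def per_sample_loss(model, x, n_transforms=4):
    # Average reconstruction error over random horizontal flips: a simple
    # stand-in for a transformation-invariant objective.
    losses = []
    for _ in range(n_transforms):
        xt = torch.flip(x, dims=[-1]) if torch.rand(1).item() < 0.5 else x
        losses.append(((model(xt) - xt) ** 2).mean(dim=1))
    return torch.stack(losses).mean(dim=0)

def train_tiae_like(data, epochs=20, lr=1e-3, keep_start=1.0, keep_end=0.6):
    model = TinyAE(dim=data.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        # Self-paced schedule (assumed): gradually shrink the kept fraction so
        # training focuses on the lowest-error, most inlier-like samples.
        keep = keep_start + (keep_end - keep_start) * epoch / max(epochs - 1, 1)
        with torch.no_grad():
            losses = per_sample_loss(model, data)
        k = max(1, int(keep * data.shape[0]))
        idx = torch.topk(-losses, k).indices  # k samples with the smallest loss
        batch = data[idx]
        opt.zero_grad()
        per_sample_loss(model, batch).mean().backward()
        opt.step()
    # Outlier score: final per-sample reconstruction error (higher = more outlying).
    with torch.no_grad():
        return per_sample_loss(model, data)

if __name__ == "__main__":
    x = torch.rand(256, 784)   # placeholder data in [0, 1]
    scores = train_tiae_like(x)
    print(scores.shape)        # one outlier score per sample
```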

Citation (APA)

Cheng, Z., Zhu, E., Wang, S., Zhang, P., & Li, W. (2021). Unsupervised Outlier Detection via Transformation Invariant Autoencoder. IEEE Access, 9, 43991–44002. https://doi.org/10.1109/ACCESS.2021.3065838
