From Imbalanced Classification to Supervised Outlier Detection Problems: Adversarially Trained Auto Encoders


Abstract

Imbalanced datasets pose severe challenges in training well-performing classifiers. This problem is also prevalent in the domain of outlier detection, since outliers occur infrequently and are generally treated as minorities. One simple yet powerful approach is to train an autoencoder on majority samples and then classify samples based on the reconstruction loss. However, this approach fails whenever the reconstruction errors of minorities overlap with those of majorities. To overcome this limitation, we propose an adversarial loss function that maximizes the loss of minorities while minimizing the loss of majorities. This way, we obtain a well-separated reconstruction error distribution that facilitates classification. We show that this approach is robust in a wide variety of settings, such as imbalanced data classification and outlier and novelty detection.
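The core idea from the abstract can be sketched in a few lines. The following is a minimal, illustrative example, not the paper's actual implementation: it assumes per-sample reconstruction errors have already been computed by an autoencoder, and the function and variable names (`adversarial_recon_loss`, `classify_by_error`, `threshold`) are hypothetical.

```python
import numpy as np

def adversarial_recon_loss(recon_err, is_minority):
    # Sketch of the adversarial objective described in the abstract:
    # minimize the reconstruction error of majority samples while
    # maximizing (here: subtracting) that of minority samples.
    majority_err = recon_err[~is_minority]
    minority_err = recon_err[is_minority]
    return majority_err.mean() - minority_err.mean()

def classify_by_error(recon_err, threshold):
    # Standard reconstruction-error classification: samples whose
    # error exceeds the threshold are flagged as minority/outlier.
    return recon_err > threshold

# Toy data: three majority samples (low error), two minorities (high error).
errors = np.array([0.1, 0.2, 0.15, 2.0, 1.8])
labels = np.array([False, False, False, True, True])

loss = adversarial_recon_loss(errors, labels)   # negative when well separated
preds = classify_by_error(errors, threshold=1.0)
```

In training, minimizing this loss pushes the two error distributions apart, which is exactly what makes the subsequent threshold-based classification reliable.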

Citation (APA)

Lübbering, M., Ramamurthy, R., Gebauer, M., Bell, T., Sifa, R., & Bauckhage, C. (2020). From Imbalanced Classification to Supervised Outlier Detection Problems: Adversarially Trained Auto Encoders. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12396 LNCS, pp. 27–38). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-61609-0_3
