Denoising Autoencoders for Overgeneralization in Neural Networks

Abstract

Despite recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization, due to their partitioning of the full input space into the fixed set of target classes used during training. Thus it is possible for novel inputs belonging to categories unknown during training, or even completely unrecognizable to humans, to fool the system into classifying them as one of the known classes, even with a high degree of confidence. This can lead to security issues in critical applications, and is closely linked to open set recognition and 1-class recognition. This paper presents a novel way to compute a confidence score using the reconstruction error of denoising autoencoders and shows how it can correctly identify the regions of the input space close to the training distribution. The proposed solution is tested on benchmarks of 'fooling', open set recognition, and 1-class recognition constructed from the MNIST and Fashion-MNIST datasets.
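To illustrate the general idea described in the abstract (not the paper's specific architecture or scoring rule), the following minimal PyTorch sketch trains a denoising autoencoder on MNIST and uses its per-sample reconstruction error as a confidence signal: inputs close to the training distribution reconstruct well, while novel or unrecognizable inputs yield large errors. The layer sizes, noise level, training length, and the use of random noise as a stand-in for novel inputs are all assumptions made for illustration.

```python
# Minimal sketch: reconstruction error of a denoising autoencoder as a
# confidence score for detecting inputs far from the training distribution.
# Architecture, noise level, and hyperparameters are illustrative assumptions,
# not the paper's exact configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

class DenoisingAE(nn.Module):
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, x):
    # Per-sample mean squared reconstruction error; lower error means higher
    # confidence that x lies close to the training distribution.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

def main():
    train_set = datasets.MNIST(".", train=True, download=True,
                               transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=256, shuffle=True)

    model = DenoisingAE().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):  # short training run for illustration
        for x, _ in loader:
            x = x.view(x.size(0), -1).to(device)
            noisy = (x + 0.3 * torch.randn_like(x)).clamp(0.0, 1.0)  # corrupt input
            opt.zero_grad()
            loss = loss_fn(model(noisy), x)  # reconstruct the clean input
            loss.backward()
            opt.step()

    # Compare reconstruction error on in-distribution digits vs. random images,
    # the latter acting as a stand-in for novel / unrecognizable inputs.
    x_in, _ = next(iter(loader))
    x_in = x_in.view(x_in.size(0), -1).to(device)
    x_out = torch.rand_like(x_in)
    print("mean error, in-distribution:", reconstruction_error(model, x_in).mean().item())
    print("mean error, novel inputs:   ", reconstruction_error(model, x_out).mean().item())

if __name__ == "__main__":
    main()
```

In practice, a threshold on this error (or a score derived from it) would decide whether a classifier's prediction should be trusted or the input flagged as novel; the threshold choice here is left open, as the abstract does not specify one.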

Citation (APA)

Spigler, G. (2020). Denoising Autoencoders for Overgeneralization in Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4), 998–1004. https://doi.org/10.1109/TPAMI.2019.2909876
