Deep bottleneck classifiers in supervised dimension reduction

Abstract

Deep autoencoder networks have been applied successfully in unsupervised dimension reduction. The autoencoder has a "bottleneck" middle layer of only a few hidden units, which gives a low-dimensional representation for the data when the full network is trained to minimize reconstruction error. We propose using a deep bottlenecked neural network in supervised dimension reduction. Instead of trying to reproduce the data, the network is trained to perform classification. Pretraining with restricted Boltzmann machines is combined with supervised finetuning. Supervised finetuning has been done before, but with cost functions that scale quadratically. Training a bottleneck classifier scales linearly, yet still gives results comparable to, and sometimes better than, two earlier supervised methods. © 2010 Springer-Verlag Berlin Heidelberg.
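The architecture the abstract describes can be sketched as a feed-forward classifier whose middle layer has only a few units; the activations of that layer serve as the low-dimensional representation. Below is a minimal NumPy sketch under assumed layer sizes (50 inputs, a 2-unit bottleneck, 10 classes) with random weights; the paper's RBM pretraining and supervised finetuning are omitted, so this shows only the structure, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative layer sizes: input -> hidden -> 2-unit bottleneck -> hidden -> classes.
layer_sizes = [50, 20, 2, 20, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Return class probabilities and the low-dimensional bottleneck code."""
    h = x
    bottleneck = None
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = h @ W + b
        if i == len(weights) - 1:            # output layer: softmax over classes
            e = np.exp(z - z.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True), bottleneck
        h = np.tanh(z)
        if layer_sizes[i + 1] == 2:          # capture the "bottleneck" middle layer
            bottleneck = h

X = rng.normal(size=(5, 50))                 # 5 toy samples with 50 features
probs, codes = forward(X)
print(probs.shape, codes.shape)              # (5, 10) (5, 2)
```

After supervised training, `codes` would give the 2-D embedding used for visualization or further analysis, while `probs` carries the classification output that drives the (linearly scaling) training cost.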

Citation (APA)

Parviainen, E. (2010). Deep bottleneck classifiers in supervised dimension reduction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6354 LNCS, pp. 1–10). https://doi.org/10.1007/978-3-642-15825-4_1
