Scoring and Classifying with Gated Auto-Encoders

  • Im, D. J.
  • Taylor, G. W.

Abstract

Auto-encoders are perhaps the best-known non-probabilistic methods for representation learning. They are conceptually simple and easy to train. Recent theoretical work has shed light on their ability to capture manifold structure and has drawn connections to density modeling. This has motivated researchers to seek ways of scoring auto-encoders, which has furthered their use in classification. Gated auto-encoders (GAEs) are an interesting and flexible extension of auto-encoders that can learn transformations between pairs of images or pixel covariances within a single image. However, they have received much less attention, theoretically or empirically. In this work, we apply a dynamical-systems view to GAEs, deriving a scoring function and drawing connections to Restricted Boltzmann Machines. On a set of deep learning benchmarks, we also demonstrate their effectiveness for single- and multi-label classification.
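For concreteness, the sketch below illustrates the kind of factored gated auto-encoder the abstract refers to: two inputs are projected onto factors, their element-wise product drives a layer of mapping units, and one input is reconstructed from the other through the inferred mapping. All dimensions, parameter names, and the reconstruction-error score are illustrative assumptions, not the authors' code; the paper derives its scoring function from a dynamical-systems view rather than from raw reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper)
n_x, n_y, n_f, n_m = 64, 64, 32, 16

# Factored GAE parameters: W_x and W_y project the two inputs onto
# factors; W_m pools factor products into mapping (relational) units.
W_x = rng.normal(scale=0.1, size=(n_f, n_x))
W_y = rng.normal(scale=0.1, size=(n_f, n_y))
W_m = rng.normal(scale=0.1, size=(n_m, n_f))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gae_reconstruct(x, y):
    """Infer mapping units from (x, y) and reconstruct y gated by x."""
    f_x = W_x @ x
    f_y = W_y @ y
    m = sigmoid(W_m @ (f_x * f_y))       # mapping units gate the factor products
    y_hat = W_y.T @ ((W_m.T @ m) * f_x)  # reconstruction of y conditioned on x
    return y_hat, m

def gae_score(x, y):
    """Simple reconstruction-error proxy for a GAE score (illustrative only)."""
    y_hat, _ = gae_reconstruct(x, y)
    return -0.5 * np.sum((y - y_hat) ** 2)

# Example usage on random vectors standing in for image pairs
x = rng.normal(size=n_x)
y = rng.normal(size=n_y)
print(gae_score(x, y))
```

Higher (less negative) scores indicate input pairs the model reconstructs well; the paper's contribution is a principled scoring function of this flavor, which can then be fed to a classifier.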

Cite

APA

Im, D. J., & Taylor, G. W. (2015). Scoring and Classifying with Gated Auto-Encoders (pp. 533–545). https://doi.org/10.1007/978-3-319-23528-8_33
