A study on the similarities of Deep Belief Networks and Stacked Autoencoders

  • de Giorgio A

Abstract

Restricted Boltzmann Machines (RBMs) and autoencoders have been used, in several variants, for similar tasks such as dimensionality reduction and feature extraction from signals. Even though their structures are quite similar, they rely on different training theories. Lately, they have been widely used as building blocks in deep learning architectures, known as deep belief networks (rather than stacked RBMs) and stacked autoencoders. In light of this, the aim of this thesis is to understand the extent of the similarities and the overall pros and cons of using RBMs, autoencoders, or denoising autoencoders in deep networks. Important characteristics are tested, such as robustness to noise, the influence of data availability on training, and the tendency to overtrain. Part of the thesis is then dedicated to studying how the three deep networks under examination form their deep internal representations and how similar these representations are to each other. As a result, a novel approach for the evaluation of internal representations, named F-Mapping, is presented. Results are reported and discussed.
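
To make the "building blocks" idea concrete, the sketch below shows greedy layer-wise pretraining of a stacked (denoising) autoencoder in plain NumPy: each layer is trained to reconstruct its (optionally corrupted) input, then frozen, and its encoded output becomes the training data for the next layer. This is a minimal illustration under assumed layer sizes, learning rate, and corruption level, not the code or experimental setup used in the thesis.

```python
# Minimal sketch of greedy layer-wise pretraining for a stacked denoising
# autoencoder. Hyperparameters (layer sizes, lr, corruption) are illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoderLayer:
    """One autoencoder layer with tied weights; corruption=0.0 gives a plain
    autoencoder, corruption>0 gives a denoising autoencoder."""
    def __init__(self, n_in, n_hidden, corruption=0.3, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(n_in, n_hidden))
        self.b_h = np.zeros(n_hidden)   # hidden (encoder) bias
        self.b_v = np.zeros(n_in)       # visible (reconstruction) bias
        self.corruption = corruption
        self.lr = lr
        self.rng = rng

    def encode(self, x):
        return sigmoid(x @ self.W + self.b_h)

    def decode(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def train_step(self, x):
        # Corrupt the input by zeroing a random fraction of its entries.
        mask = self.rng.random(x.shape) > self.corruption
        x_tilde = x * mask
        h = self.encode(x_tilde)
        x_hat = self.decode(h)
        # Gradients of the squared reconstruction error (sigmoid units).
        d_out = (x_hat - x) * x_hat * (1.0 - x_hat)
        d_hid = (d_out @ self.W) * h * (1.0 - h)
        grad_W = x_tilde.T @ d_hid + d_out.T @ h   # tied weights: both paths
        n = x.shape[0]
        self.W -= self.lr * grad_W / n
        self.b_h -= self.lr * d_hid.mean(axis=0)
        self.b_v -= self.lr * d_out.mean(axis=0)
        return np.mean((x_hat - x) ** 2)

def pretrain_stack(data, layer_sizes, epochs=10):
    """Greedy layer-wise pretraining: train each layer on the encoded output
    of the previous one, then freeze it."""
    layers, rep = [], data
    n_in = data.shape[1]
    for n_hidden in layer_sizes:
        layer = DenoisingAutoencoderLayer(n_in, n_hidden)
        for _ in range(epochs):
            layer.train_step(rep)
        rep = layer.encode(rep)   # becomes the input of the next layer
        layers.append(layer)
        n_in = n_hidden
    return layers, rep

if __name__ == "__main__":
    X = np.random.default_rng(1).random((256, 64))   # toy data
    stack, deep_features = pretrain_stack(X, [32, 16])
    print(deep_features.shape)                        # (256, 16)
```

Stacking RBMs to form a deep belief network follows the same greedy recipe; only the per-layer training rule differs (contrastive divergence instead of reconstruction-error backpropagation), which is precisely the contrast the thesis investigates.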

Cite

CITATION STYLE

APA

de Giorgio, A. (2015). A study on the similarities of Deep Belief Networks and Stacked Autoencoders. KTH Royal Institute of Technology. https://doi.org/10.13140/RG.2.1.3502.8883
