Deep dictionary learning vs deep belief network vs stacked autoencoder: An empirical analysis

15 citations · 20 Mendeley readers

Abstract

A recent work introduced the concept of deep dictionary learning. At the first level, a dictionary learning stage takes the training data as input and outputs a dictionary and the learned coefficients. At each subsequent level, the coefficients learned at the previous level act as the input. This is an unsupervised representation learning technique. In this work we empirically compare and contrast it with two similar deep representation learning techniques: the deep belief network and the stacked autoencoder. We examine two aspects: the robustness of each learning tool in the presence of noise, and its robustness with respect to variations in the number of training samples. The experiments have been carried out on several benchmark datasets. We find that deep dictionary learning is the most robust method.
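The greedy, layer-by-layer scheme described in the abstract can be sketched in a few lines of NumPy. This is an illustrative assumption, not the authors' implementation: each level factorizes its input as X ≈ DZ by ridge-regularized alternating least squares, and the coefficients Z are passed to the next level (the original formulation may additionally apply a nonlinearity between levels, which is omitted here).

```python
import numpy as np

def dict_learn(X, n_atoms, n_iter=50, lam=1e-3, seed=0):
    """One dictionary-learning level: factorize X (features x samples)
    as X ~= D @ Z via alternating ridge-regularized least squares.
    Hypothetical sketch, not the method from the paper verbatim."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    I = np.eye(n_atoms)
    for _ in range(n_iter):
        # Coefficient update: Z = (D^T D + lam I)^{-1} D^T X
        Z = np.linalg.solve(D.T @ D + lam * I, D.T @ X)
        # Dictionary update: D = X Z^T (Z Z^T + lam I)^{-1}
        D = np.linalg.solve(Z @ Z.T + lam * I, Z @ X.T).T
    return D, Z

def deep_dict_learn(X, atoms_per_level):
    """Stack levels greedily: the coefficients learned at one level
    become the training input of the next level."""
    levels, inp = [], X
    for k in atoms_per_level:
        D, Z = dict_learn(inp, k)
        levels.append((D, Z))
        inp = Z  # previous level's coefficients feed the next level
    return levels

# Usage: a two-level decomposition of random data (50 features, 200 samples)
X = np.random.default_rng(1).standard_normal((50, 200))
levels = deep_dict_learn(X, atoms_per_level=[20, 10])
(D1, Z1), (D2, Z2) = levels
```

The final coefficients `Z2` (here 10 x 200) are the deep representation that would be fed to a classifier in the experiments the abstract describes.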

Citation (APA)

Singhal, V., Gogna, A., & Majumdar, A. (2016). Deep dictionary learning vs deep belief network vs stacked autoencoder: An empirical analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9950 LNCS, pp. 337–344). Springer Verlag. https://doi.org/10.1007/978-3-319-46681-1_41
