Unsupervised learning in LSTM recurrent neural networks


Abstract

While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BINGO) and Nonparametric Entropy Optimization (NEO). LSTM learns to discriminate different types of temporal sequences and group them according to a variety of features.
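To make the NEO side of this concrete, below is a minimal, hypothetical sketch in PyTorch, not the paper's implementation (which predates modern frameworks): an LSTM encoder whose final-state codes are trained to maximize a Parzen-window entropy estimate with a Gaussian kernel. The names (LSTMEncoder, parzen_entropy) and all hyperparameters are illustrative assumptions.

    # Hypothetical NEO-style sketch: train an LSTM so that the entropy of its
    # output codes, estimated with a Gaussian Parzen window, is maximized.
    import math
    import torch
    import torch.nn as nn

    class LSTMEncoder(nn.Module):
        def __init__(self, input_size=4, hidden_size=16, code_size=2):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.proj = nn.Linear(hidden_size, code_size)

        def forward(self, x):
            # x: (batch, time, input_size); summarize each sequence by the
            # final hidden state, projected to a low-dimensional code.
            _, (h_n, _) = self.lstm(x)
            return self.proj(h_n[-1])

    def parzen_entropy(codes, sigma=0.5):
        # H_hat = -(1/N) sum_i log( (1/N) sum_j K(code_i - code_j) )
        # with Gaussian kernel K of bandwidth sigma. (NEO proper uses a
        # leave-one-out estimate; the self-term is kept here for brevity.)
        n, d = codes.shape
        sq_dists = torch.cdist(codes, codes).pow(2)
        log_k = (-sq_dists / (2 * sigma ** 2)
                 - 0.5 * d * math.log(2 * math.pi * sigma ** 2))
        log_p = torch.logsumexp(log_k, dim=1) - math.log(n)
        return -log_p.mean()

    model = LSTMEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.randn(64, 10, 4)  # toy stand-in for temporal sequences
    for step in range(200):
        loss = -parzen_entropy(model(batch))  # maximize entropy estimate
        opt.zero_grad()
        loss.backward()
        opt.step()

Under these assumptions, maximizing the entropy estimate spreads the codes apart, so sequences with different temporal structure tend to land in distinct regions of code space, which is the kind of grouping effect the abstract describes.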

Citation (APA)

Klapper-Rybicka, M., Schraudolph, N. N., & Schmidhuber, J. (2001). Unsupervised learning in LSTM recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2130, pp. 684–691). Springer-Verlag. https://doi.org/10.1007/3-540-44668-0_95
