Learning deep belief networks from non-stationary streams

Abstract

Deep learning has proven to be beneficial for complex tasks such as classifying images. However, this approach has mostly been applied to static datasets. The analysis of non-stationary (e.g., concept-drifting) data streams raises specific issues connected with the temporal and changing nature of the data. In this paper, we propose Adaptive Deep Belief Networks, a proof-of-concept method showing how deep learning can be generalized to learn online from changing streams of data. We do so by exploiting the generative properties of the model to incrementally re-train the Deep Belief Network whenever new data are collected. This approach eliminates the need to store past observations and therefore requires only constant memory. Hence, our approach can be valuable for life-long learning from non-stationary data streams. © 2012 Springer-Verlag.
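The sketch below is a rough illustration of the generative-replay idea described in the abstract, not the authors' implementation: scikit-learn's BernoulliRBM stands in for a full Deep Belief Network, and names such as generate_replay, replay_size, and gibbs_steps are assumptions made for illustration. Samples drawn from the current model replace stored past observations when re-training on each new chunk of the stream.

import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
n_visible = 64          # visible units (e.g., pixels of small binary images)
chunk_size = 200        # observations arriving per time step
replay_size = 200       # model-generated samples standing in for past data

# Single-layer stand-in for the DBN (hypothetical hyper-parameters).
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=5, random_state=0)

def generate_replay(model, n_samples, n_features, gibbs_steps=50):
    # Draw approximate samples from the current model by Gibbs sampling,
    # so past observations never need to be stored (constant memory).
    v = rng.integers(0, 2, size=(n_samples, n_features)).astype(float)
    for _ in range(gibbs_steps):
        v = model.gibbs(v).astype(float)
    return v

def next_chunk():
    # Placeholder for a (possibly drifting) data stream; random binary data here.
    return (rng.random((chunk_size, n_visible)) > 0.5).astype(float)

rbm.fit(next_chunk())   # initial training on the first chunk only

for _ in range(10):     # later time steps: new data arrive, the concept may drift
    new_data = next_chunk()
    replay = generate_replay(rbm, replay_size, n_visible)
    # Incrementally re-train on the new data mixed with generated replay
    # samples instead of stored past observations.
    rbm.partial_fit(np.vstack([new_data, replay]))

In a full DBN, the same replay step would be applied layer by layer (sampling from the generative model down to the visible layer) before re-training on the mixture of generated and newly collected data.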

Citation (APA)

Calandra, R., Raiko, T., Deisenroth, M. P., & Pouzols, F. M. (2012). Learning deep belief networks from non-stationary streams. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7553 LNCS, pp. 379–386). https://doi.org/10.1007/978-3-642-33266-1_47
