Stochastic weights reinforcement learning for exploratory data analysis


Abstract

We review a new form of immediate reward reinforcement learning in which the individual unit is deterministic but has stochastic synapses. Four learning rules have been developed from this perspective, and we investigate their use in performing linear projection techniques such as principal component analysis, exploratory projection pursuit and canonical correlation analysis. The method is very general, requiring only a reward function specific to the task we wish the unit to perform. We also discuss how the method can be used to learn kernel mappings and conclude by illustrating its use on a topology preserving mapping.
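The abstract does not reproduce the learning rules themselves, so the following is only a minimal sketch of the general idea it describes, assuming a single deterministic linear unit whose weights are drawn from Gaussians with learnable means, a fixed exploration noise, a REINFORCE-style update of the means, and the squared projection as the immediate reward so that the unit tends toward the first principal component. The constants, the normalisation step and the synthetic data are illustrative choices, not the authors' published rules.

import numpy as np

# Minimal sketch: a deterministic linear unit with stochastic synapses.
# Weights are sampled from Gaussians with learnable means mu and fixed
# exploration noise sigma; the means are updated by a REINFORCE-style
# rule driven by an immediate reward. Using the squared projection y^2
# as the reward pushes mu toward the first principal component direction.

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose leading principal component is the first axis.
n_samples, n_dims = 5000, 5
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.2])
X = rng.normal(size=(n_samples, n_dims)) * scales

mu = rng.normal(scale=0.1, size=n_dims)  # means of the stochastic synapses
sigma = 0.1                              # fixed exploration noise
eta = 1e-4                               # learning rate (illustrative)

for x in X:
    w = mu + sigma * rng.normal(size=n_dims)  # sample the stochastic weights
    y = w @ x                                 # deterministic unit output
    r = y ** 2                                # immediate reward: projected variance
    mu += eta * r * (w - mu) / sigma ** 2     # REINFORCE-style update of the means
    mu /= np.linalg.norm(mu)                  # keep the weight vector on the unit sphere

# The learned direction should align (up to sign) with the true first axis.
print("alignment with first principal component:", abs(mu[0]))

Swapping in a different reward, for example one based on the correlation between the outputs of two such units, would aim the same update at a different projection such as canonical correlation analysis, which is the sense in which the abstract says the method simply requires a reward function specific to the task.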

Citation (APA)

Wu, Y., Fyfe, C., & Lai, P. L. (2007). Stochastic weights reinforcement learning for exploratory data analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4668 LNCS, pp. 668–676). Springer Verlag. https://doi.org/10.1007/978-3-540-74690-4_68
