Learning transformed prototypes (LTP) - A statistical pattern classification technique of neural networks

Citations: 1
Readers (Mendeley): 8

Abstract

A statistical pattern recognition algorithm called learning transformed prototypes (LTP) is developed for probabilistic RAM (pRAM) neural networks. With LTP, the pRAM net learns to map the input sets statistically to the output prototypes, or codebook vectors, in the binary domain. The method allows the pRAM net to self-organise the codebook vectors in an output space of arbitrary dimension. The similarities and differences between LTP and algorithms such as LVQ (learning vector quantisation), SOFM (self-organised feature maps) and pRAM reinforcement learning are discussed. Because the training data processed by the method are the input-output spike series of the neural net, the technique can be built into a hardware system with the currently available pRAM learning chips.
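The abstract gives only this summary of the algorithm. As a rough illustration of the general idea it describes (LVQ-style self-organisation of codebook vectors whose entries are firing probabilities, in the spirit of pRAM neurons), here is a minimal sketch. The function names, the expected-Hamming winner rule, and the learning rate are assumptions for illustration, not the authors' actual LTP update:

```python
def expected_hamming(x, p):
    # Expected number of mismatched bits between a binary input x
    # and a stochastic prototype with per-bit firing probabilities p.
    return sum(abs(xi - pi) for xi, pi in zip(x, p))

def train_prototypes(inputs, n_protos, dim, lr=0.2, epochs=20):
    # Stochastic codebook vectors, pRAM-style: each entry is a
    # probability of emitting a 1 at that output bit.
    protos = [[0.5] * dim for _ in range(n_protos)]
    for _ in range(epochs):
        for x in inputs:
            # Winner = prototype with the smallest expected Hamming
            # distance to the binary input.
            k = min(range(n_protos),
                    key=lambda i: expected_hamming(x, protos[i]))
            # Pull the winner's firing probabilities toward the
            # observed binary input (LVQ-like attraction step).
            protos[k] = [p + lr * (xi - p)
                         for p, xi in zip(protos[k], x)]
    return protos
```

On two well-separated binary clusters, the two prototypes' probabilities drift toward the respective cluster centres, so rounding them recovers the binary codebook vectors; the actual LTP rule additionally handles the statistical input-output spike series mentioned in the abstract.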

Citation (APA)

Guan, Y., Clarkson, T. G., & Taylor, J. G. (1995). Learning transformed prototypes (LTP) - A statistical pattern classification technique of neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 930, pp. 441–447). Springer Verlag. https://doi.org/10.1007/3-540-59497-3_207
