A dynamic growing neural network for supervised or unsupervised learning

Abstract

A Dynamic Growing Neural Network (DGNN) for supervised learning of pattern recognition or unsupervised learning of clustering is presented. The main ideas in DGNN are growing, resonance, and post-pruning. DGNN is called "dynamic growing" because it is based on the Hebbian learning rule and adds new neurons only under certain conditions. During supervised learning, resonance occurs when the winning neuron cannot match the training example; this rule combines the ART/ARTMAP neural network with the winner-take-all (WTA) learning rule. During unsupervised learning, post-pruning is carried out to prevent overfitting the training data, much as in decision-tree learning; DGNN's pruning rule is based on a distance threshold. DGNN has several advantages: learning is stable because the network grows only under certain conditions, it is faster than back-propagation, and it achieves favorable predictive accuracy on small, noisy, online, or offline data sets. Three classes of simulations are performed on standard benchmarks: the Circle-in-the-Square and Two-Spirals benchmarks are used to test DGNN's supervised learning and compare it with ARTMAP and BP neural networks, and DGNN's unsupervised learning is tested on the Synthetic Control Chart Time Series data set from the UCI Machine Learning Archive. ©2006 IEEE.
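
The abstract names the mechanisms (grow on mismatch, ART-style resonance with a WTA winner, distance-threshold pruning) but not their exact equations, so the following Python sketch is only a minimal illustration of that loop under assumed details: Euclidean-distance matching, a scalar vigilance threshold, and a usage-count pruning heuristic. All names here (GrowingNet, train_one, vigilance, prune_dist, lr) are hypothetical and not taken from the paper.

```python
import numpy as np

class GrowingNet:
    """Minimal grow-on-mismatch network sketch (not the authors' exact DGNN)."""

    def __init__(self, vigilance=0.5, prune_dist=1.5, lr=0.1):
        self.vigilance = vigilance    # resonance/match threshold (assumed form)
        self.prune_dist = prune_dist  # distance threshold for post-pruning (assumed form)
        self.lr = lr                  # Hebbian-style update rate (assumed form)
        self.weights = []             # one prototype vector per neuron
        self.labels = []              # class label per neuron (supervised mode)
        self.hits = []                # usage counts, used here for pruning

    def _winner(self, x):
        # Winner-take-all: the neuron whose prototype is nearest to x wins.
        d = [np.linalg.norm(x - w) for w in self.weights]
        j = int(np.argmin(d))
        return j, d[j]

    def train_one(self, x, y):
        x = np.asarray(x, float)
        if not self.weights:  # first example: grow the first neuron
            self.weights.append(x.copy()); self.labels.append(y); self.hits.append(1)
            return
        j, dist = self._winner(x)
        # "Resonance": accept the winner only if it is close enough and, in
        # supervised mode, carries the right label (in the spirit of ARTMAP).
        # An unsupervised variant would drop the label test.
        if dist <= self.vigilance and self.labels[j] == y:
            # Hebbian-style move of the winning prototype toward the input.
            self.weights[j] += self.lr * (x - self.weights[j])
            self.hits[j] += 1
        else:
            # Mismatch: grow a new neuron committed to this example.
            self.weights.append(x.copy()); self.labels.append(y); self.hits.append(1)

    def post_prune(self):
        # Drop neurons whose prototype lies within prune_dist of a more-used,
        # same-label neuron already kept (distance-threshold pruning).
        keep = []
        for j, w in enumerate(self.weights):
            redundant = any(
                self.labels[k] == self.labels[j]
                and self.hits[k] >= self.hits[j]
                and np.linalg.norm(w - self.weights[k]) < self.prune_dist
                for k in keep
            )
            if not redundant:
                keep.append(j)
        self.weights = [self.weights[j] for j in keep]
        self.labels = [self.labels[j] for j in keep]
        self.hits = [self.hits[j] for j in keep]

    def predict(self, x):
        return self.labels[self._winner(np.asarray(x, float))[0]]
```

A short usage example of the sketch, again purely illustrative:

```python
net = GrowingNet(vigilance=0.4)
for x, y in [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.12, 0.22], 0)]:
    net.train_one(x, y)
net.post_prune()
print(net.predict([0.15, 0.25]))  # -> 0
```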

CITATION STYLE

APA

Tian, D., Liu, Y., & Wei, D. (2006). A dynamic growing neural network for supervised or unsupervised learning. In Proceedings of the World Congress on Intelligent Control and Automation (WCICA) (Vol. 1, pp. 2886–2890). https://doi.org/10.1109/WCICA.2006.1712893
