Optimization analysis of dynamic sample number and hidden layer node number based on BP neural network

Abstract

Using BP neural network theory, a cyclic search method over the dynamic training-sample number and the dynamic hidden-node number is proposed to improve the prediction accuracy of the network model. Training samples are added one at a time to control network training, the dynamic-sample error-expectation matrix is searched, and the training-sample number that minimizes the error expectation is selected. Then, through an optimization model for the hidden-node number, the number of hidden-layer nodes that minimizes the network output error is found. Analysis of worked examples shows that, as the number of training samples increases, the expected network output error passes through three stages: a recurrent large-error stage, a gradually declining-error stage, and a stable small-error stage. As the number of hidden-layer nodes increases, the trend is reversed. This shows that choosing proper numbers of training samples and hidden-layer nodes is of great significance for improving the output precision of a neural network. © Springer-Verlag Berlin Heidelberg 2013.
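The two-stage search the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic data, network sizes, search ranges, and the use of scikit-learn's MLPRegressor as the BP network are all assumptions made for the example.

```python
# A minimal sketch of the cyclic search described in the abstract, assuming a
# generic regression task. The dataset, error metric, and search ranges are
# illustrative stand-ins, not the paper's actual experimental setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic data standing in for the paper's example problem.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=200)
X_test, y_test = X[150:], y[150:]   # held-out points for the error expectation

def expected_error(n_samples: int, n_hidden: int) -> float:
    """Train a BP (backpropagation) network on the first n_samples points
    and return its held-out error, the quantity being minimized."""
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X[:n_samples], y[:n_samples])
    return mean_squared_error(y_test, net.predict(X_test))

# Stage 1: grow the training set one sample at a time (as in the abstract),
# record the error expectation for each size, and keep the best one.
sample_errors = {n: expected_error(n, n_hidden=8) for n in range(10, 150)}
best_n = min(sample_errors, key=sample_errors.get)

# Stage 2: with the sample count fixed, search the hidden-node count
# for the minimum network output error.
hidden_errors = {h: expected_error(best_n, h) for h in range(2, 21)}
best_h = min(hidden_errors, key=hidden_errors.get)

print(f"best sample count: {best_n}, best hidden nodes: {best_h}")
```

Plotting sample_errors against the sample count should reproduce the three-stage behavior the abstract reports (large fluctuating error, declining error, then stable small error), at least qualitatively, on problems of this kind.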

Citation (APA)

Xu, C., & Xu, C. (2013). Optimization analysis of dynamic sample number and hidden layer node number based on BP neural network. Advances in Intelligent Systems and Computing, 212, 687–695. https://doi.org/10.1007/978-3-642-37502-6_82
