Stochastic Search Algorithms for Identification, Optimization, and Training of Artificial Neural Networks

  • Nikolic K

Abstract

This paper presents stochastic search algorithms (SSA) suitable for the effective identification, optimization, and training of artificial neural networks (ANNs). The author introduces a modified algorithm of nonlinear stochastic search (MN-SDS), whose main objective is to improve the convergence of the original nonlinear stochastic search (N-SDS) method due to Professor Rastrigin. Given the vast range of possible algorithms and procedures, the so-called method of stochastic direct search (SDS) is used (in the literature it is also called stochastic local search, SLS). The convergence of MN-SDS is a considerable advance over N-SDS; indeed, it converges faster than a range of gradient-based optimization procedures. SDS, that is, SLS, has not been applied extensively to the identification, optimization, and training of ANNs, yet its efficiency in some strongly nonlinear cases makes it well suited to ANN optimization and training. The presented examples only partially illustrate the operation and efficiency of SDS, that is, MN-SDS. The backpropagation of error (BPE) method was used for comparison.
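To make the idea concrete, the following is a minimal sketch of a stochastic direct (local) search applied to a tiny neural network. It is not the paper's MN-SDS algorithm; it only illustrates the general SDS/SLS principle the abstract describes: perturb the weight vector with a random trial step and accept the step only if the training loss improves. The network size (2-2-1), the step distribution, and all parameter values are illustrative assumptions.

```python
import math
import random

def forward(w, x):
    """Tiny 2-2-1 network with tanh hidden units; w is a flat list of 9 weights."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

def loss(w, data):
    """Mean squared error over the training set."""
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def stochastic_direct_search(data, steps=5000, step_size=0.5, seed=0):
    """Greedy stochastic local search: random Gaussian trial steps,
    accepted only when they reduce the loss (illustrative, not MN-SDS)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in range(9)]
    best = loss(w, data)
    for _ in range(steps):
        trial = [wi + rng.gauss(0.0, step_size) for wi in w]
        trial_loss = loss(trial, data)
        if trial_loss < best:       # accept only improving steps
            w, best = trial, trial_loss
    return w, best

# XOR-like training data, a standard nonlinear toy problem
data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
weights, final_loss = stochastic_direct_search(data)
```

Because steps are accepted only when they improve the loss, the final loss can never exceed the loss of the random initial weights; refinements such as MN-SDS aim to accelerate this convergence, e.g., by adapting the step distribution.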


CITATION STYLE

APA

Nikolic, K. P. (2015). Stochastic Search Algorithms for Identification, Optimization, and Training of Artificial Neural Networks. Advances in Artificial Neural Systems, 2015, 1–16. https://doi.org/10.1155/2015/931379
