In this paper we develop and analyze spiking neural network (SNN) versions of resilient propagation (RProp) and QuickProp, two methods that accelerate training in artificial neural networks (ANNs) by making certain assumptions about the data and the error surface. Modifications are made to both algorithms to adapt them to SNNs. Results generated on the standard XOR and Fisher Iris data sets using the QuickProp and RProp versions of SpikeProp are shown to converge to a final error of 0.5, on average 80% faster than SpikeProp alone.
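As a point of reference for the RProp family of methods discussed above, the following is a minimal sketch of the classic per-weight RProp update rule (sign-based steps with multiplicative step-size adaptation). The parameter values (`eta_plus`, `eta_minus`, and the step bounds) are the standard defaults from the original RProp formulation, not values taken from this paper, and the function name is illustrative.

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RProp update for a single weight.

    w         : current weight value
    grad      : dE/dw at this iteration
    prev_grad : dE/dw at the previous iteration
    step      : current per-weight step size
    Returns (new_w, new_prev_grad, new_step).
    """
    sign = (grad > 0) - (grad < 0)      # sign of the current gradient
    s = grad * prev_grad
    if s > 0:
        # Gradient kept its sign: grow the step and move downhill.
        step = min(step * eta_plus, step_max)
        w -= step * sign
        prev_grad = grad
    elif s < 0:
        # Sign flipped (overshoot): shrink the step, skip the update,
        # and suppress adaptation on the next iteration.
        step = max(step * eta_minus, step_min)
        prev_grad = 0.0
    else:
        # First iteration or suppressed iteration: plain signed step.
        w -= step * sign
        prev_grad = grad
    return w, prev_grad, step
```

Only the sign of the gradient is used, which is what makes RProp robust to the poorly scaled gradients that also arise in SpikeProp-style SNN training.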