Fast Modifications of the SpikeProp Algorithm

  • McKennoch, S.
  • Liu, D.
  • Bushnell, L.
  • 39 Mendeley readers
  • 58 citations


In this paper we develop and analyze spiking neural network (SNN) versions of resilient propagation (RProp) and QuickProp, two training methods that speed up training in artificial neural networks (ANNs) by making certain assumptions about the data and the error surface. Modifications are made to both algorithms to adapt them to SNNs. On the standard XOR and Fisher Iris data sets, the QuickProp and RProp versions of SpikeProp are shown to converge to a final error of 0.5 on average 80% faster than SpikeProp on its own.
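To make the underlying mechanism concrete, here is a minimal NumPy sketch of the basic RProp update rule that these modifications build on: per-weight step sizes grow when the gradient keeps its sign and shrink when it flips. This is an illustrative sketch of standard RProp only, not the paper's SNN-specific variant; the function and parameter names are our own.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RProp update (illustrative; hyperparameters are the
    commonly used defaults, not values taken from the paper)."""
    sign_change = grad * prev_grad
    # Same gradient sign: accelerate; sign flip: back off the step size.
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max),
                    np.where(sign_change < 0,
                             np.maximum(step * eta_minus, step_min),
                             step))
    # On a sign flip, zero the gradient so no move is made this iteration.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Move each weight against its gradient sign by its own step size.
    w = w - np.sign(grad) * step
    return w, grad, step
```

Because the update depends only on gradient signs, not magnitudes, it is insensitive to the scale of the error surface, which is part of what makes it attractive as a drop-in accelerator for SpikeProp-style training.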


