Initial Experiments Evolving Spiking Neural Networks with Supervised Learning Capability

Abstract

There is currently much research activity aimed at synaptic plasticity methods for spiking neural networks (SNNs). While many methods have been proposed, few provide for supervised learning. A fundamental premise of the work reported here is that the network topology is key to defining the network's capabilities: the topology IS the algorithm. Hence, learning at the level of the whole network is an emergent phenomenon of the learning mechanism operating on individual synapses and of the topology. Therefore, the topology and the learning mechanism(s) must be designed together, and evolutionary computation (EC) is a suitable technology for this. We report initial experiments on a relatively simple test problem, the tonic burster, using several types of learning, including supervised learning. We observe that EC can locate seemingly good solutions that do not actually solve the desired task; they "cheat" by simply exploiting the supervisory signals. A simple modification of the train-test protocol solves this. We introduce an approach we call "artificial neurology" for systematically examining the behaviour of an SNN in order to understand how it achieves its performance. Experiments indicate that a combination of Hebbian and supervised learning works best for this task.
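The abstract only outlines the anti-cheating fix at a high level: supply the supervisory signal during a training phase, then score fitness on a test phase where that signal is withheld, so a candidate network cannot simply echo the teacher input. The sketch below is a rough, hypothetical illustration of that two-phase evaluation inside an evolutionary loop; the toy leaky integrate-and-fire dynamics, the Hebbian-plus-supervised update, the burst-detection surrogate, and every parameter value are stand-ins chosen for brevity, not the paper's actual model or settings.

```python
# Hypothetical sketch (not from the paper): evolve weight matrices for a toy
# spiking network, with supervision available only during the training phase.
# Fitness is scored on a test phase with the teacher signal withheld, so a
# genome cannot "cheat" by exploiting the supervisory input directly.
import numpy as np

N = 8            # neurons per candidate network (illustrative size)
T_TRAIN = 200    # training time steps (supervision on)
T_TEST = 200     # test time steps (supervision off)
RNG = np.random.default_rng(0)

def simulate(weights, supervision_on, eta=0.01):
    """Run a toy leaky integrate-and-fire network with a simple
    Hebbian + supervised weight update; returns the output spike train."""
    v = np.zeros(N)                  # membrane potentials
    spikes = np.zeros(N)
    out = []
    steps = T_TRAIN if supervision_on else T_TEST
    for t in range(steps):
        # Teacher alternates 20-step on/off windows, only during training.
        teacher = 1.0 if (supervision_on and (t // 20) % 2 == 0) else 0.0
        i_syn = weights @ spikes + teacher * 0.5       # recurrent + teacher drive
        v = 0.9 * v + i_syn + 0.1 * RNG.random(N)      # leak + input + noise
        spikes = (v > 1.0).astype(float)
        v[spikes > 0] = 0.0                            # reset fired neurons
        if supervision_on:
            # Hebbian term plus a supervised term pushing activity toward the teacher.
            weights += eta * (np.outer(spikes, spikes)
                              + 0.1 * teacher * np.outer(spikes, np.ones(N)))
            np.fill_diagonal(weights, 0.0)
        out.append(spikes[0])                          # neuron 0 is the output
    return np.array(out), weights

def fitness(genome):
    """Train with supervision, then score burst-like output without it."""
    w = genome.copy()
    _, w = simulate(w, supervision_on=True)            # plasticity shapes the weights
    out, _ = simulate(w, supervision_on=False)         # no teacher signal to exploit
    # Toy surrogate for "tonic bursting": reward alternating active/quiet windows.
    windows = out.reshape(-1, 20).mean(axis=1)
    return float(np.abs(np.diff(windows)).sum())

def evolve(pop_size=20, generations=10):
    pop = [RNG.normal(0, 0.3, size=(N, N)) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = parents + [p + RNG.normal(0, 0.05, size=(N, N)) for p in parents]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best test-phase fitness:", fitness(best))
```

The point of the sketch is the structure of fitness(): because the scored phase contains no supervisory input, a genome that merely relays the teacher signal receives no credit, which is the essence of the train-test protocol modification the abstract describes.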

Cite

APA

Schaffer, J. D. (2017). Initial Experiments Evolving Spiking Neural Networks with Supervised Learning Capability. In Procedia Computer Science (Vol. 114, pp. 184–191). Elsevier B.V. https://doi.org/10.1016/j.procs.2017.09.034
