Why Spiking Neural Networks Are Efficient: A Theorem


Abstract

Current artificial neural networks are very successful in many machine learning applications, but in some cases they still lag behind human abilities. A natural idea for improving their performance is to simulate features of biological neurons that are not yet implemented in machine learning. One such feature is that in biological neural networks, signals are represented by trains of spikes. Researchers have added this spiking behavior to machine learning models and indeed obtained very good results, especially when processing time series (and, more generally, spatio-temporal data). In this paper, we provide a possible theoretical explanation for this empirical success.
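As an illustration of the spike-train representation the abstract refers to, the following is a minimal leaky integrate-and-fire (LIF) neuron sketch. The model choice and all parameter values (`tau`, `v_thresh`, `v_reset`) are illustrative assumptions for exposition, not taken from the paper:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: turns a continuous
# input signal into a train of spikes. Parameters are illustrative
# assumptions, not values from the paper.

def lif_spike_train(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a LIF neuron; return a 0/1 spike indicator per time step."""
    v = 0.0          # membrane potential
    spikes = []
    for current in inputs:
        # Leaky integration: the potential decays toward 0 with time
        # constant tau while accumulating the input current.
        v += dt * (-v / tau + current)
        if v >= v_thresh:       # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset         # reset the potential after spiking
        else:
            spikes.append(0)
    return spikes
```

A constant sub-threshold input charges the membrane over several steps until a spike is emitted, after which the potential resets; a stronger input produces a denser spike train, which is how such models encode signal magnitude in spike timing.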

Citation (APA)

Beer, M., Urenda, J., Kosheleva, O., & Kreinovich, V. (2020). Why Spiking Neural Networks Are Efficient: A Theorem. In Communications in Computer and Information Science (Vol. 1237 CCIS, pp. 59–69). Springer. https://doi.org/10.1007/978-3-030-50146-4_5
