Current artificial neural networks are very successful in many machine learning applications, but in some cases they still lag behind human abilities. A natural idea for improving their performance is to simulate features of biological neurons that are not yet implemented in machine learning. One such feature is that in biological neural networks, signals are represented by trains of spikes. Researchers have added this spiking behavior to machine learning models and indeed obtained very good results, especially when processing time series (and, more generally, spatio-temporal data). In this paper, we provide a possible theoretical explanation for this empirical success.
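The spike-based signal representation mentioned above can be illustrated with a minimal leaky integrate-and-fire sketch. This is a standard spiking-neuron model, not the construction analyzed in the paper, and all parameter values here are illustrative:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron: each input
# value is integrated into a membrane potential that decays over time;
# when the potential crosses a threshold, the neuron emits a spike and
# resets. Parameters (leak, threshold) are illustrative, not from the paper.

def lif_spike_train(inputs, leak=0.9, threshold=1.0):
    """Convert a real-valued input sequence into a binary spike train."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = leak * potential + x   # leaky integration of the input
        if potential >= threshold:         # threshold crossing -> spike
            spikes.append(1)
            potential = 0.0                # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_spike_train([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # → [0, 0, 1, 0, 0, 1]
```

The timing of the 1s in the output carries information about the input's temporal structure, which is one intuition for why spiking networks suit spatio-temporal data.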
Beer, M., Urenda, J., Kosheleva, O., & Kreinovich, V. (2020). Why Spiking Neural Networks Are Efficient: A Theorem. In Communications in Computer and Information Science (Vol. 1237 CCIS, pp. 59–69). Springer. https://doi.org/10.1007/978-3-030-50146-4_5