Abstract
Recent spectacular advances in artificial intelligence (AI) can, in large part, be attributed to developments in deep learning (DL). In essence, DL is not a new concept. In many respects, DL shares characteristics with "traditional" types of neural networks (NNs). The main distinguishing feature is that it uses many more layers in order to learn increasingly complex features. Each layer convolves over the previous one, simplifying it by applying a function to a subsection of that layer. Deep learning's remarkable success can be attributed to dedicated researchers experimenting with many groundbreaking techniques, but some of its triumphs can also be attributed to fortune: it was the right technique at the right time. To function effectively, DL mainly requires two things: (a) vast amounts of training data and (b) a very specific type of computational capacity. These two requirements have been amply met by the growth of the Internet and the rapid development of GPUs, respectively. As such, DL is an almost perfect fit for today's technologies. However, DL is only a very rough approximation of how the brain works. More recently, spiking neural networks (SNNs) have tried to simulate biological phenomena in a more realistic way. In SNNs, information is transmitted as discrete spikes rather than as a continuous weight or a differentiable activation function. In practical terms, this means that far more nuanced interactions can occur between neurons and that the network can run far more efficiently (e.g., in terms of the calculations needed and therefore the overall power requirements). Nevertheless, the big problem with SNNs is that, unlike DL, they do not "fit" well with existing technologies. Worse still, no one has yet come up with a definitive way to make SNNs function at a "deep" level.
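The contrast drawn above between continuous activations and discrete spikes can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron; the constants and function name below are illustrative assumptions, not taken from the paper:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch: information is
# carried as discrete spike events rather than a continuous activation.
# threshold and leak values are illustrative, not from the paper.

def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Return a binary spike train for a sequence of input currents."""
    v = 0.0                 # membrane potential
    spikes = []
    for i in inputs:
        v = leak * v + i    # leaky integration of incoming current
        if v >= threshold:  # fire when the threshold is crossed...
            spikes.append(1)
            v = 0.0         # ...then reset the potential
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.6, 0.6, 0.0, 0.6, 0.6]))  # -> [0, 1, 0, 0, 1]
```

Note that the neuron only emits an event when its accumulated potential crosses the threshold, which is why spike-based computation can be sparse and power-efficient.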
The difficulty is that, in essence, "deep" and "spiking" refer to fundamentally different characteristics of a neural network: "spiking" focuses on the activation of individual neurons, whereas "deep" concerns itself with the network architecture itself (Pfeiffer and Pfeil, Front Neurosci 12, 2018) [1]. However, these two methods are in fact not contradictory, but have so far been developed in isolation from each other, due to the prevailing technology driving each technique and the fundamental conceptual distance between the two biological paradigms. If advances in AI are to continue at the present rate, then new technologies are going to be developed and the contradictory aspects of DL and SNNs are going to have to be reconciled. Very recently, there have been a handful of attempts to amalgamate DL and SNNs in a variety of ways (Tavanaei et al., Neural Netw 111:47–63, 2019) [2], one of the most exciting being the creation of a specific hierarchical learning paradigm in recurrent SNNs (RSNNs) called e-prop (Bellec et al., bioRxiv, 2019) [3]. However, this paper posits that such a union has been made problematic because a fundamental agent in the way the biological brain functions has been missing from each paradigm, and that if it is included in a new model, then the union between DL and RSNNs can be made in a more harmonious manner. The missing piece of the jigsaw is, in fact, the glial cell, and the unacknowledged function it plays in neural processing. In this context, this paper examines how DL and SNNs can be combined, and how glial dynamics can not only address outstanding issues with the existing individual paradigms (for example, the "weight transport" problem) but also act as the "glue" (pun intended) between these two paradigms. This idea has a direct parallel with the idea of convolution in DL, but with the added dimension of time: in this new paradigm, it is important not only where events happen but also when they occur.
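The "convolution with an added dimension of time" parallel can be sketched as a kernel sliding along a spike train, so that the response depends on when spikes occur, not only where; the kernel values and function name here are illustrative assumptions, not the paper's method:

```python
# Toy temporal convolution: a kernel slides over a binary spike train,
# so spike *timing* shapes the response. Kernel values are illustrative.

def temporal_conv(spike_train, kernel):
    """1-D 'valid'-mode convolution over a binary spike train."""
    k = len(kernel)
    return [
        sum(kernel[j] * spike_train[t + j] for j in range(k))
        for t in range(len(spike_train) - k + 1)
    ]

# Two spikes adjacent in time produce a stronger response than the
# same spikes occurring far apart.
print(temporal_conv([1, 1, 0, 0, 1], [0.5, 1.0]))  # -> [1.5, 0.5, 0.0, 1.0]
```

Here the peak response (1.5) occurs only where two spikes fall within the kernel's temporal window, a simple analogue of coincidence detection in time.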
The synergy between these two powerful paradigms gives hints at the direction and potential of what could be an important part of the next wave of development in AI.
Reid, D., & Secco, E. L. (2020). Temporal Convolution in Spiking Neural Networks: A Bio-mimetic Paradigm. In Advances in Intelligent Systems and Computing (Vol. 1139, pp. 211–222). Springer. https://doi.org/10.1007/978-981-15-3287-0_17