The ripple pond: Enabling spiking networks to see

8 citations · 45 Mendeley readers

Abstract

We present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that performs a transformation converting two-dimensional images into one-dimensional temporal patterns (TPs) suitable for recognition by temporal-coding learning and memory networks. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures, such as frameless vision sensors and neuromorphic temporal-coding spiking neural networks. Working together, such systems are potentially capable of delivering end-to-end high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power-hungry signal-processing solutions are ineffective. Key aspects of the proposed approach include utilizing the spatial properties of physically embedded neural networks and propagating waves of activity therein for information processing, using dimensional collapse of imagery information into amenable TPs, and using asynchronous frames for information binding. © 2013 Afshar, Cohen, Wang, Van Schaik, Tapson, Lehmann and Hamilton.
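
The core operation described above, collapsing a two-dimensional image into a one-dimensional TP as activity travels inward toward a central summing neuron, can be pictured with a small numerical sketch. This is an illustrative toy under assumptions of our own (pixels binned onto discrete arms and rings, one ring traversed per time step), not the authors' hardware; the function and parameter names (ripple_pond_tp, n_arms, n_rings) are likewise hypothetical.

```python
import numpy as np

def ripple_pond_tp(image, n_arms=64, n_rings=32):
    """Toy dimensional collapse: pixels are binned onto a disc of radial arms
    and concentric rings; activity on ring k is assumed to reach the central
    summing neuron k time steps later, so the per-ring sums read out over time
    form the 1-D temporal pattern (TP). Illustrative sketch only."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx) + 1e-9

    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)                    # radius of every pixel
    theta = np.arctan2(ys - cy, xs - cx)              # angle of every pixel

    ring = np.floor(r / max_r * n_rings).astype(int)  # ring index doubles as arrival time
    arm = np.floor((theta + np.pi) / (2 * np.pi) * n_arms).astype(int) % n_arms

    inside = ring < n_rings                           # drop image corners outside the disc
    disc = np.zeros((n_arms, n_rings))
    np.add.at(disc, (arm[inside], ring[inside]), image[inside])

    return disc.sum(axis=0)                           # all arms feed one summing neuron

if __name__ == "__main__":
    img = np.zeros((65, 65))
    img[20:45, 20:45] = 1.0                           # simple square stimulus
    print(ripple_pond_tp(img).round(1))
```

In this sketch, rotating the image about the disc centre only permutes which arm a pixel falls on, so the summed per-ring readout is unchanged; that is the intuition behind reading the image out radially rather than in Cartesian raster order.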

Figures

  • FIGURE 1 | Batesian mimicry: The highly poisonous pufferfish, Canthigaster valentini (top), and its edible mimic, Paraluteres prionurus (bottom). The remarkable degree of precision in the deception reveals the sophisticated recognition capabilities of the neural networks of local predatory reef fish (Caley and Schluter, 2003). These networks, despite being orders of magnitude smaller than those of primates, seem capable of matching human vision in performance and motivate the investigation of very simple solutions to the problem of visual recognition. (Note: the dorsal, anal and pectoral fins are virtually invisible in the animals’ natural environment).
  • FIGURE 2 | (A) Typical model of a single element in a distributed temporal coding memory network, with synaptic alpha functions used as decaying synaptic kernels producing a decaying memory of recent spikes (a minimal kernel sketch follows this figure list). (B) Biological representation of the same element. Through adaptation of synaptic weights and kernels, a specific spatio-temporal pattern is learnt by the neuron. (C) Flipping the pattern as would
  • FIGURE 3 | The spiral RPN System Diagram: raw image to TP. (A)
  • FIGURE 4 | Two frame generation approaches: (A) A periodic enable signal projects new frames onto the RPN. (B) The inhibitory neuron is connected to all neurons on the disc. As the disc activation collapses inward along the arms and leaves the disc via the summing neuron, the total activation reaching the inhibitory neuron also falls. Once the disc activation reaches zero, the path of the input image is unblocked allowing
  • FIGURE 5 | (A) Generating uniform global and local neuron density in a radially symmetric distribution via an adaptive algorithm that varies the angle βn of new neurons such that the distance to the nearest neighbor is maximized, resulting in a spiral-structured disc (a greedy placement sketch follows this figure list). The disc is shown with ( = 8, N = 4). (B) Spiral propagating waves of neural activity on the chicken retina due to excitation. Image from Yu et al. (2012). (C) The spiral structure at larger scales: RPN disc with ( = 8, N = 128).
  • FIGURE 6 | Time-warp-invariant memory network and the RPN’s scale invariance: (A) The memory network learns a particular spatio-temporal pattern. (B) The memory network recognizes a time-warped version of the learnt pattern. (C) The RPN system generates a spatio-temporal pattern from the projected image of reef fish via simple color based
  • FIGURE 7 | Standard orientation-sensitive feature extractors cannot precede the RPN: (A) Feature extraction via Cartesian Gabor filters, G(α), groups features into feature maps based on their orientation relative to the Cartesian coordinate system. The feature maps are input to the RPN; however, the rotational
  • FIGURE 8 | RPN producing a spatio-temporal pattern using parallel radial Gabor filters and discs with varying neuron densities. The incident image is processed by a filter bank of radial Gabor filters, generating multiple feature maps (3 shown); each of these is then projected onto three discs with N (full), N/2 (half), and N/4 (quarter) density. The lower-density discs simply generate an earlier, low-resolution version of the full TP, which can be used by the memory for normalization or early categorization. The nine TPs shown illustrate the multiplicative effect of feature extractors when combined in a fan-out fashion.
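
Figure 2A refers to synaptic alpha functions used as decaying kernels that hold a fading memory of recent spikes. One common convention writes the kernel as α(t) = (t/τ)·exp(1 − t/τ) for t ≥ 0, rising from zero and peaking at t = τ; the paper may use a differently scaled form, and the time constants, spike times and function names below are illustrative assumptions only.

```python
import numpy as np

def alpha_kernel(tau, dt=1e-3, length=None):
    """Alpha synaptic kernel a(t) = (t/tau) * exp(1 - t/tau), peaking at t = tau.
    One common convention; normalisation choices vary between models."""
    if length is None:
        length = 8 * tau
    t = np.arange(0.0, length, dt)
    return (t / tau) * np.exp(1.0 - t / tau)

def synaptic_trace(spike_times, tau=20e-3, dt=1e-3, t_max=0.3):
    """Decaying memory of recent spikes: each input spike injects an alpha
    kernel into the neuron's synaptic activation trace."""
    t = np.arange(0.0, t_max, dt)
    trace = np.zeros_like(t)
    kernel = alpha_kernel(tau, dt)
    for ts in spike_times:
        i = int(round(ts / dt))
        n = min(len(kernel), len(trace) - i)
        if n > 0:
            trace[i:i + n] += kernel[:n]
    return t, trace

if __name__ == "__main__":
    t, trace = synaptic_trace([0.02, 0.05, 0.11])
    # Summed activation exceeds a single kernel's peak when spikes arrive close together.
    print(trace.max().round(3))
```

Summing one kernel per afferent spike gives the decaying synaptic trace that a temporal-coding memory neuron can weight and threshold when learning a spatio-temporal pattern.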

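Figure 5A’s placement rule, choosing each new neuron’s angle so that the distance to its nearest already-placed neighbour is maximised, can be approximated with a greedy search. The square-root radial spacing, the fixed grid of candidate angles, and the names spiral_disc, n_candidates and radial_step below are assumptions for illustration, not the authors’ adaptive algorithm.

```python
import numpy as np

def spiral_disc(n_neurons=128, n_candidates=360, radial_step=1.0):
    """Greedy sketch of the Fig. 5A idea: each new neuron sits slightly further
    from the centre, and its angle is chosen to maximise the distance to the
    nearest already-placed neuron."""
    placed = [np.array([0.0, 0.0])]                   # summing neuron at the centre
    for k in range(1, n_neurons):
        r = radial_step * np.sqrt(k)                  # sqrt spacing keeps area per neuron roughly constant
        angles = np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False)
        candidates = np.stack([r * np.cos(angles), r * np.sin(angles)], axis=1)
        pts = np.array(placed)
        # nearest-neighbour distance of every candidate angle to the existing neurons
        d = np.linalg.norm(candidates[:, None, :] - pts[None, :, :], axis=2).min(axis=1)
        placed.append(candidates[int(d.argmax())])
    return np.array(placed)

if __name__ == "__main__":
    xy = spiral_disc(64)
    print(xy.shape)                                   # (64, 2) neuron coordinates on the disc
```

Plotting the returned coordinates gives a roughly uniform disc; with the square-root radial spacing the max-min rule tends to stagger the angles from ring to ring, which is the effect the caption attributes to the spiral-structured disc.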

Citation (APA)

Afshar, S., Cohen, G. K., Wang, R. M., Van Schaik, A., Tapson, J., Lehmann, T., & Hamilton, T. J. (2013). The ripple pond: Enabling spiking networks to see. Frontiers in Neuroscience, 7. https://doi.org/10.3389/fnins.2013.00212

Readers' Seniority

  • PhD / Postgrad / Masters / Doc: 21 (54%)
  • Researcher: 11 (28%)
  • Professor / Associate Prof.: 7 (18%)

Readers' Discipline

  • Computer Science: 15 (43%)
  • Engineering: 13 (37%)
  • Neuroscience: 4 (11%)
  • Agricultural and Biological Sciences: 3 (9%)
