Sparsity enables data and energy efficient spiking convolutional neural networks


Abstract

In recent years, deep learning has surpassed human performance on image recognition tasks. A major issue with deep learning systems is their reliance on large datasets for optimal performance, so when presented with a new task, the ability to generalize from small amounts of data becomes highly attractive. Research suggests that the human visual cortex may employ sparse coding to extract features from the images we see, leading to efficient use of the available data. To achieve good generalization and energy efficiency, we build a multi-layer spiking convolutional neural network that performs layer-wise sparse coding for unsupervised feature extraction. Applied to the MNIST dataset, it achieves 92.3% accuracy with just 500 data samples, 4× fewer than vanilla CNNs need for comparable accuracy, and reaches 98.1% accuracy on the full dataset. Only around 7000 spikes are used per image (a 6× reduction in transferred bits per forward pass compared to CNNs), implying high sparsity. Thus, we show that our algorithm ensures better sparsity, leading to improved data and energy efficiency in learning, which is essential for some real-world applications.
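To illustrate the sparse-coding idea the abstract refers to, here is a minimal NumPy sketch of inferring a sparse code for an input against a fixed dictionary via ISTA (iterative soft thresholding). This is a generic sparse-coding routine, not the paper's spiking implementation; the dictionary, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code z minimizing ||x - D @ z||^2 / 2 + lam * ||z||_1.

    Generic ISTA, used here only to illustrate sparse coding; the paper's
    network performs this kind of inference with spiking neurons instead.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant (squared spectral norm)
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the reconstruction term
        z = z - grad / L                   # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

# Toy example: a signal built from 3 of 32 unit-norm dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)
x = D[:, :3] @ np.array([1.0, -0.5, 0.8])
z = ista_sparse_code(x, D, lam=0.05)
print("active coefficients:", np.count_nonzero(np.abs(z) > 1e-3))
```

The L1 penalty drives most coefficients to exactly zero, so only a few dictionary atoms fire per input; this is the property that translates into the low spike counts reported in the abstract.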

Citation (APA)

Bhatt, V., & Ganguly, U. (2018). Sparsity enables data and energy efficient spiking convolutional neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11139 LNCS, pp. 263–272). Springer Verlag. https://doi.org/10.1007/978-3-030-01418-6_26
