Learning to recognize actions from limited training examples using a recurrent spiking neural model

Abstract

A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions from a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that the proposed reservoir achieves 81.3%/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models.
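
To make the described pipeline concrete, below is a minimal, self-contained sketch of a reservoir-style spiking pipeline for few-shot action recognition. It is an assumption-laden illustration rather than the authors' method: the frame-difference spike encoding (a crude stand-in for the microsaccade-inspired encoding), the leaky integrate-and-fire reservoir and its parameters, the synthetic toy videos, and the ridge-regression readout are all hypothetical choices made for brevity.

```python
# Minimal sketch of a reservoir-based spiking pipeline for few-shot action
# recognition. This is an illustrative approximation, NOT the authors'
# implementation: the frame-difference spike encoding, LIF parameters,
# reservoir size, toy data, and ridge-regression readout are all assumptions.
import numpy as np

rng = np.random.default_rng(0)

def encode_frames_to_spikes(frames, threshold=0.1):
    """Convert a (T, H, W) video into binary spike trains of shape (T, H*W).

    A pixel emits a spike when its intensity changes by more than `threshold`
    between consecutive frames, which loosely preserves temporal correlation
    across frames (a stand-in for the paper's microsaccade-inspired encoding).
    """
    diffs = np.abs(np.diff(frames, axis=0, prepend=frames[:1]))
    return (diffs > threshold).reshape(frames.shape[0], -1).astype(float)

class LIFReservoir:
    """Fixed random recurrent reservoir of leaky integrate-and-fire neurons."""

    def __init__(self, n_in, n_res=500, tau=20.0, v_thresh=1.0, p_conn=0.1):
        self.w_in = rng.normal(0, 0.5, (n_res, n_in)) * (rng.random((n_res, n_in)) < p_conn)
        self.w_rec = rng.normal(0, 0.5, (n_res, n_res)) * (rng.random((n_res, n_res)) < p_conn)
        self.w_rec /= np.sqrt(p_conn * n_res)          # keep recurrent drive bounded
        self.tau, self.v_thresh, self.n_res = tau, v_thresh, n_res

    def run(self, spikes_in):
        """Drive the reservoir with input spikes and return the mean firing
        rate of each reservoir neuron as a state vector for the readout."""
        v = np.zeros(self.n_res)                       # membrane potentials
        s = np.zeros(self.n_res)                       # spikes from the previous step
        counts = np.zeros(self.n_res)
        for x in spikes_in:                            # one time step per frame
            v += (-v + self.w_in @ x + self.w_rec @ s) / self.tau
            s = (v >= self.v_thresh).astype(float)     # spike where threshold is crossed
            v[s > 0] = 0.0                             # reset neurons that spiked
            counts += s
        return counts / len(spikes_in)

def toy_video(c, T=40, H=16, W=16):
    """Synthetic stand-in for a labeled clip: a bright bar sweeping across the
    frame at a class-dependent speed, plus a little pixel noise."""
    video = 0.05 * rng.random((T, H, W))
    for t in range(T):
        video[t, :, (t * (c + 1)) % W] = 1.0           # faster motion for higher class index
    return video

# --- Toy usage: 8 labeled videos per class, linear (ridge) readout -----------
n_classes, shots = 3, 8
reservoir = LIFReservoir(n_in=16 * 16)

X, y = [], []
for c in range(n_classes):
    for _ in range(shots):
        X.append(reservoir.run(encode_frames_to_spikes(toy_video(c))))
        y.append(c)
X, y = np.array(X), np.array(y)

# One-vs-all ridge regression readout trained on the frozen reservoir states.
Y = np.eye(n_classes)[y]
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)
pred = np.argmax(X @ W_out, axis=1)
print("training accuracy on the toy data:", (pred == y).mean())
```

Only the linear readout is trained; the recurrent reservoir stays fixed, which is the property that makes learning from as few as eight examples per class plausible in this setting. The toy numbers above do not reproduce the paper's reported accuracy.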

Cite

APA

Panda, P., & Srinivasa, N. (2018). Learning to recognize actions from limited training examples using a recurrent spiking neural model. Frontiers in Neuroscience, 12, 126. https://doi.org/10.3389/fnins.2018.00126
