Static hand gesture recognition for American sign language using neuromorphic hardware

Abstract

In this paper, we develop four spiking neural network (SNN) models for two static American sign language (ASL) hand gesture classification tasks, i.e., the ASL alphabet and ASL digits. The SNN models are deployed on Intel’s neuromorphic platform, Loihi, and then compared against equivalent deep neural network (DNN) models deployed on an edge computing device, the Intel neural compute stick 2 (NCS2). We perform a comprehensive comparison between the two systems in terms of accuracy, latency, power consumption, and energy. The best DNN model achieves an accuracy of 99.93% on the ASL alphabet dataset, whereas the best performing SNN model has an accuracy of 99.30%. For the ASL digits dataset, the best DNN model achieves an accuracy of 99.76%, while the best SNN model achieves 99.03%. Moreover, our experimental results show that the Loihi neuromorphic hardware implementations achieve up to 20.64× and 4.10× reductions in power consumption and energy, respectively, when compared to the NCS2.
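As a rough illustration of the kind of spiking inference the abstract describes, the plain-NumPy sketch below rate-encodes a static image into spike trains and classifies it with a single leaky integrate-and-fire (LIF) output layer. This is an assumption-laden sketch, not the authors' Loihi model: the input size, number of time steps, threshold, leak factor, and random weights are all placeholders.

```python
import numpy as np

# Minimal sketch of rate-coded spiking inference with leaky integrate-and-fire
# (LIF) neurons. NOT the paper's Loihi implementation; all sizes and constants
# below are illustrative assumptions.

rng = np.random.default_rng(0)

N_PIXELS = 28 * 28      # assumed flattened input size
N_CLASSES = 26          # ASL alphabet classes
T_STEPS = 100           # simulation time steps
TAU = 0.9               # membrane leak factor
V_TH = 1.0              # firing threshold

# Random weights stand in for a trained model.
weights = rng.normal(0.0, 0.1, size=(N_CLASSES, N_PIXELS))

def rate_encode(image, t_steps):
    """Convert pixel intensities in [0, 1] to Bernoulli spike trains."""
    return (rng.random((t_steps, image.size)) < image.ravel()).astype(float)

def lif_classify(image):
    """Run a single LIF output layer over the spike train and return the
    class whose neuron emits the most output spikes."""
    spikes_in = rate_encode(image, T_STEPS)
    v = np.zeros(N_CLASSES)             # membrane potentials
    spike_counts = np.zeros(N_CLASSES)  # output spike tally
    for t in range(T_STEPS):
        v = TAU * v + weights @ spikes_in[t]  # leak + integrate input current
        fired = v >= V_TH
        spike_counts += fired
        v[fired] = 0.0                        # reset neurons that spiked
    return int(np.argmax(spike_counts))

# Example: classify a random "image" standing in for an ASL hand gesture.
dummy_image = rng.random(N_PIXELS)
print("Predicted class index:", lif_classify(dummy_image))
```

Because computation here happens only when spikes arrive, accumulating output spikes over a fixed time window is one common way such event-driven models trade a small accuracy loss for the power and energy savings reported above.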

Citation (APA)

Mohammadi, M., Chandarana, P., Seekings, J., Hendrix, S., & Zand, R. (2022). Static hand gesture recognition for American sign language using neuromorphic hardware. Neuromorphic Computing and Engineering, 2(4). https://doi.org/10.1088/2634-4386/ac94f3
