Dynamic Vision Sensor Based Gesture Recognition Using Liquid State Machine

Abstract

The Dynamic Vision Sensor (DVS) is a neuromorphic sensor. Compared with traditional cameras, DVS-based event cameras offer high temporal resolution, low power consumption, reduced motion blur, and low data redundancy, making them well suited to computer vision tasks such as gesture recognition. However, traditional algorithms adapt poorly to DVS output. This paper implements a lightweight gesture recognition system using a Liquid State Machine (LSM), which can consume the events generated by the DVS directly as input. To improve the performance of the LSM, we use a heuristic search algorithm to find a better parameter configuration. Our system achieves 98.42% accuracy on the DVS128 Gesture Dataset. Notably, our model contains 90% fewer parameters than Inception 3D, which achieves a precision of 99.62% on the same dataset. The system encodes and compresses the event stream into event frames, uses a Convolutional Neural Network (CNN) to extract features from these frames, and then feeds the extracted features to the reservoir to perform gesture recognition.
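To make the pipeline described above concrete, the following is a minimal NumPy sketch of the same idea: events are binned into event frames, a fixed random projection stands in for the paper's CNN feature extractor, and a leaky, randomly connected reservoir replaces the spiking LSM. The helper names (events_to_frames, SimpleReservoir), the frame count, the reservoir size, and all other parameters are illustrative assumptions rather than the authors' implementation, and the heuristic parameter search is omitted.

import numpy as np


def events_to_frames(events, num_frames=32, height=128, width=128):
    # Accumulate a DVS event stream of rows (t, x, y, polarity) into a
    # fixed number of 2-channel (OFF/ON) event frames by binning over time.
    frames = np.zeros((num_frames, 2, height, width), dtype=np.float32)
    t = events[:, 0].astype(np.float64)
    span = np.ptp(t) + 1e-9
    bins = np.minimum(((t - t.min()) / span * num_frames).astype(int), num_frames - 1)
    for (_, x, y, p), b in zip(events.astype(int), bins):
        frames[b, 1 if p > 0 else 0, y, x] += 1.0
    return frames


class SimpleReservoir:
    # Leaky, randomly connected reservoir (an echo-state-style stand-in for
    # the paper's spiking LSM) that maps a feature sequence to a final state.
    def __init__(self, n_in, n_res=500, spectral_radius=0.9, leak=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.uniform(-0.5, 0.5, (n_res, n_res))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w = w
        self.leak = leak
        self.n_res = n_res

    def run(self, seq):
        # seq: (T, n_in) feature sequence -> (n_res,) final reservoir state.
        x = np.zeros(self.n_res)
        for u in seq:
            x = (1 - self.leak) * x + self.leak * np.tanh(self.w_in @ u + self.w @ x)
        return x


if __name__ == "__main__":
    # Random data standing in for a real DVS128 event stream.
    rng = np.random.default_rng(1)
    n = 5000
    events = np.column_stack([
        np.sort(rng.integers(0, 1_000_000, n)),  # timestamps (microseconds)
        rng.integers(0, 128, n),                 # x coordinate
        rng.integers(0, 128, n),                 # y coordinate
        rng.integers(0, 2, n),                   # polarity
    ])
    frames = events_to_frames(events)             # (32, 2, 128, 128)
    # Placeholder feature extractor: a fixed random projection of each frame.
    # The paper uses a learned CNN here instead.
    flat = frames.reshape(frames.shape[0], -1)
    proj = rng.standard_normal((flat.shape[1], 64)).astype(np.float32) * 0.01
    feats = flat @ proj                           # (32, 64) per-frame features
    state = SimpleReservoir(n_in=64).run(feats)   # final reservoir state
    print("reservoir state shape:", state.shape)  # (500,)

In the actual system, a trained readout over the reservoir states would map to the DVS128 Gesture classes; this sketch stops at producing the final reservoir state.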

Citation (APA)

Xiao, X., Wang, L., Chen, X., Qu, L., Guo, S., Wang, Y., & Kang, Z. (2022). Dynamic Vision Sensor Based Gesture Recognition Using Liquid State Machine. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13531 LNCS, pp. 618–629). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-15934-3_51
