Method for multimodal recognition of one-handed sign language gestures through 3D convolution and LSTM neural networks

Abstract

The paper presents an approach to the multimodal recognition of dynamic and static gestures of Russian sign language using 3D convolutional and LSTM neural networks. A dataset of 48 one-handed Russian sign language gestures, recorded in color and depth-map formats, is presented as well. The dataset was collected with the Kinect v2 sensor and contains recordings of 13 different native signers of Russian sign language. The obtained results are compared with those of other methods. The classification experiment demonstrated the strong potential of neural networks for this problem: the achieved recognition accuracy of 73.25% is the best result among the compared approaches.
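
The abstract does not include code, but a minimal PyTorch sketch of the kind of architecture it describes (a 3D-convolutional feature extractor per modality, fused and fed to an LSTM) might look as follows. The class name GestureNet, the layer widths, the 16-frame clip length, and the concatenation-based fusion of the RGB and depth streams are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class GestureNet(nn.Module):
        """Hypothetical two-stream 3D-CNN + LSTM classifier for RGB-D gesture clips.

        Each stream applies 3D convolutions over a clip of shape
        (batch, channels, frames, height, width); the per-frame features of
        both streams are concatenated and fed to an LSTM, whose final hidden
        state is classified into one of num_classes gestures.
        """

        def __init__(self, num_classes=48, hidden=256):
            super().__init__()
            # One small 3D-conv stack per modality (3-channel RGB, 1-channel depth).
            self.rgb_stream = self._make_stream(in_channels=3)
            self.depth_stream = self._make_stream(in_channels=1)
            # LSTM over the time axis of the fused feature sequence.
            self.lstm = nn.LSTM(input_size=2 * 64, hidden_size=hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        @staticmethod
        def _make_stream(in_channels):
            return nn.Sequential(
                nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool space, keep time
                nn.Conv3d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d((None, 1, 1)),    # collapse space, keep time
            )

        def forward(self, rgb, depth):
            # rgb: (B, 3, T, H, W); depth: (B, 1, T, H, W)
            f_rgb = self.rgb_stream(rgb).flatten(2).transpose(1, 2)      # (B, T, 64)
            f_depth = self.depth_stream(depth).flatten(2).transpose(1, 2)
            seq = torch.cat([f_rgb, f_depth], dim=2)                     # (B, T, 128)
            _, (h_n, _) = self.lstm(seq)
            return self.fc(h_n[-1])                                      # (B, num_classes)

    if __name__ == "__main__":
        model = GestureNet()
        rgb = torch.randn(2, 3, 16, 64, 64)      # 16-frame RGB clip
        depth = torch.randn(2, 1, 16, 64, 64)    # aligned depth clip
        print(model(rgb, depth).shape)           # torch.Size([2, 48])

The two-stream layout mirrors the "multimodal" framing of the abstract: color and depth are processed separately and fused late, which lets each stream learn modality-specific filters before the LSTM models the temporal dynamics of the gesture.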

Citation (APA)

Kagirov, I., Ryumin, D., & Axyonov, A. (2019). Method for multimodal recognition of one-handed sign language gestures through 3D convolution and LSTM neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11658 LNAI, pp. 191–200). Springer Verlag. https://doi.org/10.1007/978-3-030-26061-3_20
