Skeleton-Based Sign Language Recognition with Graph Convolutional Networks on Small Data


Abstract

Sign language is an important means of communication for people with speech or hearing impairments, but it is difficult for people without such impairments to understand. Technology that supports communication between these two groups is therefore needed, and sign language recognition (SLR) is key to facilitating it. In this work, we propose an approach that recognizes sign language from dynamic skeletons using graph convolutional networks (GCNs): the complex dynamic skeleton of a sign is captured as a graph structure, over which the convolution is performed. In addition, we propose a skeleton data augmentation method that uses MediaPipe and 3D motion data to build a new skeleton dataset for SLR from small data. Using 20 signs from the Kogakuin University Japanese Sign Language Multi-Dimensional Database (KoSign), we achieve an average accuracy of 44.6% top-1 and 90.2% top-5 across two subjects.
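To make the graph-convolution idea in the abstract concrete, the following is a minimal sketch of one spatial graph-convolution layer over a skeleton graph, in the spirit of ST-GCN-style models. The 5-joint chain skeleton, feature sizes, and random weights are illustrative assumptions, not the paper's actual architecture or the KoSign data.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2},
    # so that each joint aggregates its own and its neighbors' features.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def graph_conv(X, A_norm, W):
    # Aggregate features along the skeleton edges, then project: ReLU(A X W).
    return np.maximum(A_norm @ X @ W, 0.0)

# Toy skeleton: 5 joints connected in a chain (hypothetical, for illustration).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # per-joint input features, e.g. (x, y, z)
W = rng.standard_normal((3, 8))   # projection weights (random here, learned in practice)

out = graph_conv(X, normalize_adjacency(A), W)
print(out.shape)  # one 8-dim feature vector per joint
```

A full model would stack such layers, add a temporal dimension over the sign's frames, and end in a classifier over the sign vocabulary; this sketch only shows the per-frame spatial aggregation step.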

Citation (APA)

Nakamura, Y., & Jing, L. (2022). Skeleton-Based Sign Language Recognition with Graph Convolutional Networks on Small Data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13519 LNCS, pp. 134–142). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-17618-0_11
