Sign Language Recognition Based on Residual Network

Abstract

For deaf people, sign language is an important way to communicate with the world, but few non-deaf people understand sign language, so communication between deaf and non-deaf people is a major problem. With the development of deep learning, CNN-LSTM networks are widely used for sign language recognition, extracting temporal and spatial information from video. However, the CNN-LSTM network suffers from overfitting, which limits its generalization ability. To address this problem, this paper studies an improved CNN-LSTM network (Sh-Res-LSTM). The network uses a Residual Network built from an improved residual module to extract spatial features from sign language video, and then an LSTM network to extract temporal features from the feature sequence produced by the Residual Network. A loss function with label smoothing is used during training, and the effect of four different residual blocks in the CNN-LSTM network structure is also compared. Experiments show that our design improves the network's generalization ability and achieves a recognition rate of 97% on the test dataset.
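
The abstract describes a two-stage pipeline: a residual CNN encodes each video frame into spatial features, an LSTM models the resulting feature sequence over time, and training uses a cross-entropy loss with label smoothing. The snippet below is only a minimal sketch of that kind of architecture in PyTorch; the class name SignLanguageResLSTM, the off-the-shelf ResNet-18 backbone, and all hyperparameters (hidden size, number of classes, smoothing factor) are illustrative assumptions, not the authors' actual Sh-Res-LSTM or their improved residual module.

    # Minimal sketch (assumption, not the paper's code): ResNet backbone for
    # per-frame spatial features, LSTM for temporal modeling, label-smoothed
    # cross-entropy loss for training.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class SignLanguageResLSTM(nn.Module):                     # hypothetical name
        def __init__(self, num_classes=100, hidden_size=512): # illustrative sizes
            super().__init__()
            backbone = resnet18(weights=None)  # stand-in for the paper's improved residual network
            self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer
            self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, num_classes)

        def forward(self, clips):
            # clips: (batch, time, channels, height, width)
            b, t, c, h, w = clips.shape
            feats = self.cnn(clips.view(b * t, c, h, w))  # per-frame spatial features
            feats = feats.view(b, t, -1)                  # (batch, time, 512)
            out, _ = self.lstm(feats)                     # temporal modeling over the frame sequence
            return self.fc(out[:, -1])                    # classify from the last time step

    # Label smoothing, as mentioned in the abstract, is available directly in PyTorch:
    model = SignLanguageResLSTM()
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # smoothing factor is an assumption
    dummy_clips = torch.randn(2, 16, 3, 224, 224)         # 2 clips of 16 frames each
    logits = model(dummy_clips)
    loss = criterion(logits, torch.tensor([3, 7]))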

Citation (APA)

Li, X., Zhao, Q., Song, S., & Shen, T. (2022). Sign Language Recognition Based on Residual Network. In Lecture Notes in Electrical Engineering (Vol. 961, pp. 1240–1249). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-19-6901-0_130
