Peruvian Sign Language Recognition Using a Hybrid Deep Neural Network

Abstract

Hearing-impaired people communicate with their hands and interpret sign language (SL), but this creates a communication gap with hearing people. Existing SL recognition models take sequences of RGB images as input; however, given the complexity of the gestures, the movement of the body in 3D space must also be considered. We built a model for Peruvian Sign Language (PSL) to Spanish recognition composed of four phases: first, a preprocessing phase that processes the RGB, depth, and skeleton streams obtained through the Kinect v1 sensor; second, feature extraction, which learns spatial information through three types of convolutional neural networks (CNNs); third, a bidirectional long short-term memory (BLSTM) with residual connections that reduces and encodes the information; finally, a decoder with an attention mechanism and a maxout network that learns the temporal information. Our proposed model is evaluated on LSA64 and our self-built dataset. The experimental results show significant improvement compared to other models evaluated on these datasets.
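The four-phase pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: all layer sizes, the small per-stream CNNs, the additive residual connection, the single-vector attention pooling, and the k=2 maxout classifier head are assumptions chosen only to show how the phases compose.

```python
# Hedged sketch of the four-phase PSL pipeline (assumptions throughout):
# phase 1 inputs = preprocessed RGB/depth/skeleton frame streams,
# phase 2 = per-stream CNNs, phase 3 = BLSTM encoder with a residual
# connection, phase 4 = attention + maxout classifier over sign labels.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Phase 2 (assumed architecture): a tiny CNN extracting per-frame spatial features."""
    def __init__(self, in_ch, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )

    def forward(self, x):  # (B*T, C, H, W) -> (B*T, feat_dim)
        return self.net(x)

class PSLRecognizer(nn.Module):
    def __init__(self, vocab_size=64, feat_dim=64, hidden=64):
        super().__init__()
        # Phase 2: one CNN per input stream (RGB, depth, skeleton map).
        self.cnn_rgb = SmallCNN(3, feat_dim)
        self.cnn_depth = SmallCNN(1, feat_dim)
        self.cnn_skel = SmallCNN(1, feat_dim)
        # Phase 3: BLSTM encoder; the residual connection here adds a
        # linear projection of the input back to the BLSTM output (assumed).
        self.proj = nn.Linear(3 * feat_dim, 2 * hidden)
        self.blstm = nn.LSTM(3 * feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        # Phase 4: attention pooling over time + maxout (k=2) classifier.
        self.attn = nn.Linear(2 * hidden, 1)
        self.maxout = nn.Linear(2 * hidden, 2 * vocab_size)
        self.vocab_size = vocab_size

    def forward(self, rgb, depth, skel):
        B, T = rgb.shape[:2]
        def per_frame(cnn, x):               # apply CNN to every frame
            return cnn(x.flatten(0, 1)).view(B, T, -1)
        feats = torch.cat([per_frame(self.cnn_rgb, rgb),
                           per_frame(self.cnn_depth, depth),
                           per_frame(self.cnn_skel, skel)], dim=-1)
        enc, _ = self.blstm(feats)
        enc = enc + self.proj(feats)              # residual connection
        w = torch.softmax(self.attn(enc), dim=1)  # (B, T, 1) attention weights
        ctx = (w * enc).sum(dim=1)                # attended context vector
        z = self.maxout(ctx).view(B, self.vocab_size, 2)
        return z.max(dim=-1).values               # maxout over k=2 pieces

model = PSLRecognizer()
rgb = torch.randn(2, 8, 3, 32, 32)    # batch of 2 clips, 8 frames each
depth = torch.randn(2, 8, 1, 32, 32)
skel = torch.randn(2, 8, 1, 32, 32)
logits = model(rgb, depth, skel)
print(logits.shape)
```

The sketch classifies isolated signs (as in LSA64, hence the assumed vocabulary of 64); the paper's actual decoder is sequence-to-sequence with attention, which this pooling-based head only approximates.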

Citation (APA)

Vargas, Y. V. H., Ccasa, N. N. D., & Rodas, L. E. (2020). Peruvian Sign Language Recognition Using a Hybrid Deep Neural Network. In Communications in Computer and Information Science (Vol. 1070 CCIS, pp. 165–172). Springer. https://doi.org/10.1007/978-3-030-46140-9_16
