An Ensembled Scale-Space Model of Deep Convolutional Neural Networks for Sign Language Recognition


Abstract

A sign language translator is a useful tool for facilitating communication between the deaf community and the hearing majority. This paper proposes a specialized convolutional neural network (CNN) model, Sign-Net, that recognizes hand gesture signs by incorporating scale-space theory into a deep learning framework. The proposed model is an ensemble of CNNs: a low resolution network (LRN) and a high resolution network (HRN). This architecture allows the ensemble to operate at different spatial resolutions and at varying CNN depths. The Sign-Net model was assessed on static signs of American Sign Language, namely alphabets and digits. Since no sign dataset suited to deep learning was available, the ensemble's performance was evaluated on a synthetic dataset collected for this task. On this dataset, Sign-Net achieved an accuracy of 74.5%, outperforming the other existing models considered.
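To make the ensemble idea concrete, the following is a minimal sketch in PyTorch of a two-branch, scale-space-style ensemble. The layer counts, channel widths, input resolutions (32x32 and 128x128), the 36-class output (26 alphabets + 10 digits), and the logit-averaging fusion are all illustrative assumptions; the abstract does not specify these details of Sign-Net.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LowResNet(nn.Module):
    # Shallow branch operating on a low-resolution view of the sign image.
    def __init__(self, num_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # 32x32 input -> 8x8 feature map

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class HighResNet(nn.Module):
    # Deeper branch operating on a high-resolution view of the same image.
    def __init__(self, num_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)  # 128x128 input -> 8x8 feature map

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class ScaleSpaceEnsemble(nn.Module):
    # Run both branches at their own spatial scales and average the class logits.
    def __init__(self, num_classes=36):
        super().__init__()
        self.lrn = LowResNet(num_classes)
        self.hrn = HighResNet(num_classes)

    def forward(self, image):
        low = F.interpolate(image, size=(32, 32), mode='bilinear', align_corners=False)
        high = F.interpolate(image, size=(128, 128), mode='bilinear', align_corners=False)
        return (self.lrn(low) + self.hrn(high)) / 2

# Usage example: one RGB sign image, 36 output classes.
model = ScaleSpaceEnsemble(num_classes=36)
logits = model(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 36])

The key design point this sketch illustrates is that the two networks differ both in input resolution and in depth, so each branch captures structure at a different scale before their predictions are fused.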

Citation (APA)

Aloysius, N., & Geetha, M. (2021). An Ensembled Scale-Space Model of Deep Convolutional Neural Networks for Sign Language Recognition. In Advances in Intelligent Systems and Computing (Vol. 1133, pp. 363–375). Springer. https://doi.org/10.1007/978-981-15-3514-7_29
