American sign language video hand gestures recognition using deep neural networks

ISSN: 2249-8958

Abstract

In this paper, an effort is made to translate/recognize some video-based hand gestures of American Sign Language (ASL) into human- and/or machine-readable English text using deep neural networks. The recognition process begins by fetching the input video gestures. In the proposed algorithm, a Gaussian Mixture Model (GMM) is used for background elimination and foreground detection, and basic preprocessing operations are applied for better segmentation of the video gestures. Several feature extraction techniques, namely Speeded Up Robust Features (SURF), Zernike Moments (ZM), the Discrete Cosine Transform (DCT), Radon Features (RF), and R, G, B levels, are used to extract hand features from the frames of the video gestures. The extracted video hand gesture features are then used for classification and recognition in the subsequent stage, for which a deep neural network (a stacked autoencoder) is employed. This video hand gesture recognition system can serve as a tool to bridge the communication gap between hearing and hearing-impaired people. The proposed ASL video hand gesture recognition (VHGR) approach achieves an average recognition rate of 96.43%, which compares favorably with state-of-the-art techniques.
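To make the described pipeline concrete, the following is a minimal Python sketch of the three stages the abstract outlines: GMM-based foreground extraction, per-frame feature extraction, and a stacked-autoencoder-style classifier. It is an illustration under assumptions, not the authors' implementation: it uses OpenCV's MOG2 background subtractor as the GMM step, reduces the feature set to DCT coefficients and mean R, G, B levels (SURF, Zernike, and Radon features are omitted), and the layer sizes and libraries (OpenCV, Keras) are chosen for brevity.

```python
# Sketch only: GMM background subtraction -> per-frame hand features -> stacked-autoencoder classifier.
import cv2
import numpy as np
from tensorflow import keras

def frame_features(frame, fgmask, size=64):
    """Build a small feature vector from the foreground (hand) region of one frame."""
    hand = cv2.bitwise_and(frame, frame, mask=fgmask)            # keep only foreground pixels
    hand = cv2.resize(hand, (size, size))
    gray = cv2.cvtColor(hand, cv2.COLOR_BGR2GRAY)
    dct = cv2.dct(np.float32(gray) / 255.0)[:8, :8].ravel()      # low-frequency DCT coefficients
    rgb = hand.reshape(-1, 3).mean(axis=0) / 255.0               # mean R, G, B levels
    return np.concatenate([dct, rgb])                            # SURF/Zernike/Radon omitted in this sketch

def video_features(path):
    """Average per-frame features over one gesture video after GMM background elimination."""
    bg = cv2.createBackgroundSubtractorMOG2()                    # GMM-based background model
    cap, feats = cv2.VideoCapture(path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fgmask = bg.apply(frame)                                 # foreground mask for this frame
        feats.append(frame_features(frame, fgmask))
    cap.release()
    return np.mean(feats, axis=0)

def build_classifier(input_dim, n_classes):
    """Stacked encoder layers topped with a softmax; layer widths are illustrative assumptions."""
    return keras.Sequential([
        keras.layers.Input(shape=(input_dim,)),
        keras.layers.Dense(128, activation="relu"),              # first encoder
        keras.layers.Dense(64, activation="relu"),               # second encoder
        keras.layers.Dense(n_classes, activation="softmax"),     # gesture class probabilities
    ])
```

In such a setup, `video_features` would be applied to each training video, and the resulting vectors used to train the classifier (e.g., compiled with categorical cross-entropy); in a true stacked autoencoder the encoder layers would first be pretrained to reconstruct their inputs before fine-tuning with labels.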

APA

Shivashankara, S., & Srinath, S. (2019). American sign language video hand gestures recognition using deep neural networks. International Journal of Engineering and Advanced Technology, 8(5), 2742–2751.
