Hand Gesture Recognition using Deep Learning

Abstract

Hearing-impaired individuals use sign language to communicate with others in their community. Because the language is in widespread use among them, hard-of-hearing individuals understand it easily, but most hearing people do not know it. In this paper, a hand gesture recognition system is developed to overcome this problem, so that those who do not know sign language can communicate simply with hard-of-hearing individuals. A computer-vision-based system is designed to detect sign language. The datasets used in this paper are binary images, which are fed to a convolutional neural network (CNN); the model extracts image features, classifies the images, and recognises the gestures. The gestures used in this paper are from American Sign Language. In the real-time system, images are converted to binary images using the Hue, Saturation, and Value (HSV) colour model. In this model, 87.5% of the data is used for training and 12.5% for testing, and the accuracy obtained is 97%.
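The abstract describes converting camera frames to binary images via HSV thresholding before they reach the CNN. The paper does not publish code or its exact threshold values, so the following is a minimal stdlib-only sketch of that preprocessing step; the skin-tone bounds (`H_MAX`, `S_MIN`, `S_MAX`, `V_MIN`) are illustrative assumptions, not values from the paper.

```python
import colorsys

# Assumed skin-tone range in HSV (illustrative only; the paper does
# not state its actual thresholds).
H_MAX = 50 / 360.0        # hue upper bound (reddish-yellow tones)
S_MIN, S_MAX = 0.15, 0.70 # saturation band
V_MIN = 0.30              # minimum brightness

def binarize(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples, 0-255)
    to a binary mask: 1 where the pixel falls inside the assumed
    skin-tone HSV range, 0 elsewhere."""
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            skin = h <= H_MAX and S_MIN <= s <= S_MAX and v >= V_MIN
            mask_row.append(1 if skin else 0)
        mask.append(mask_row)
    return mask

# A skin-toned pixel maps to 1; a saturated blue pixel maps to 0.
print(binarize([[(200, 150, 120), (0, 0, 255)]]))
```

In a real pipeline this per-pixel loop would be replaced by a vectorised operation (e.g. OpenCV's `cv2.cvtColor` followed by `cv2.inRange`), and the resulting binary mask would be resized and fed to the CNN for classification.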

Citation (APA)

Devi, A. G. … Nath, R. M. (2020). Hand Gesture Recognition using Deep Learning. International Journal of Engineering and Advanced Technology, 9(4), 455–459. https://doi.org/10.35940/ijeat.d6765.049420
