Sign Language Detection and Conversion to Text Using CNN and OpenCV

Abstract

Conversing with people who are deaf or mute is difficult because spoken language forms a barrier between them and hearing people. Our aim is to create an interface that converts sign language to text so that anyone can understand the signs of people who are unable to speak. For this purpose, we train a model to recognize hand gestures. Such a model could be trained with KNN, support vector machines, logistic regression, or CNNs, but CNNs are more accurate and effective for image recognition, so we train our model with a CNN. After training on an American Sign Language dataset, the model predicts text with good accuracy, but it fails when OpenCV is used to recognize hand gestures from live video. To improve the model, we created our own dataset of captured images, retrained the model on it, and predicted text from webcam input. We also convert the inputs to black and white after background subtraction, which makes hand-gesture recognition more accurate.
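The abstract outlines the pipeline but gives no code, so the following is a minimal illustrative sketch of how such a system might be wired together, assuming TensorFlow/Keras for the CNN and OpenCV for webcam capture and background subtraction. The network architecture, the 64x64 input size, the 26-letter label set, and the helper names (build_cnn, preprocess) are assumptions for illustration, not the authors' exact implementation.

```python
# Hedged sketch of the sign-to-text pipeline described above.
# Assumes TensorFlow/Keras and OpenCV; architecture and sizes are illustrative.
import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 26   # assumed: one class per static ASL letter
IMG_SIZE = 64      # assumed input resolution

def build_cnn():
    """Small CNN for static hand-gesture classification (illustrative)."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def preprocess(frame):
    """Grayscale, blur, and binarize a frame, mirroring the
    black-and-white conversion the abstract describes."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, bw = cv2.threshold(blur, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    bw = cv2.resize(bw, (IMG_SIZE, IMG_SIZE))
    return bw.astype("float32")[None, ..., None] / 255.0

model = build_cnn()  # in practice, load weights trained on the custom dataset

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()  # background subtraction
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # foreground mask
    hand = cv2.bitwise_and(frame, frame, mask=mask)  # keep the moving hand
    probs = model.predict(preprocess(hand), verbose=0)
    letter = chr(ord("A") + int(np.argmax(probs)))
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("Sign to text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In a real run the CNN would first be trained on (or have weights loaded from) the custom webcam dataset, and the MOG2 subtractor needs a few initial background-only frames before its foreground mask becomes reliable.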

CITATION STYLE

APA

Kumar, H., Sharma, M. K., Rohit, Bisht, K. S., Kumar, A., Jain, R., … Singh, P. (2022). Sign Language Detection and Conversion to Text Using CNN and OpenCV. In AIP Conference Proceedings (Vol. 2555). American Institute of Physics Inc. https://doi.org/10.1063/5.0108711
