Sign language translator

Abstract

The communication gap between people with hearing disabilities and hearing people is a challenge to our society and is yet to be completely solved. In this paper, we present Sign Language Translator, an end-to-end system aimed at solving this problem. The system takes video input from the user and returns the translation of each sign into English. We train the system on an American Sign Language (ASL) dataset with 29 classes and use Convolutional Neural Networks (CNN) as the central architecture. The system is divided into three parts: the Video Stream Input System (VSIS), the Hand Segmentation System (HSS), and the Sign Language Classification System (SLCS). The system takes video input from a web camera and processes the video one frame at a time; each frame is sent to the HSS to detect hands, and finally to the SLCS to classify the gesture represented by the detected hand.
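The three-stage pipeline described above (VSIS → HSS → SLCS) could be sketched as follows. This is a minimal, hypothetical illustration: frame capture, hand segmentation, and classification are all stubbed with placeholder logic, whereas the real system uses a webcam stream, a hand-detection step, and a trained CNN over the 29 ASL classes.

```python
# Hypothetical sketch of the pipeline in the abstract: VSIS -> HSS -> SLCS.
# Frames are plain lists of pixel intensities; all three stages are stubs.

# 26 letters plus "space", "del", "nothing" gives the 29 classes mentioned.
ASL_CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["space", "del", "nothing"]

def video_stream(frames):
    """VSIS: yield frames one at a time (here, from a list instead of a webcam)."""
    for frame in frames:
        yield frame

def segment_hand(frame):
    """HSS: return the hand region, or None if no hand is detected.
    Stubbed as a simple intensity threshold; the real system detects hands."""
    hand = [[px for px in row if px > 128] for row in frame]
    return hand if any(hand) else None

def classify_sign(hand):
    """SLCS: map the segmented hand region to one of the 29 classes.
    Stubbed with a deterministic hash; the real system runs a CNN here."""
    total = sum(px for row in hand for px in row)
    return ASL_CLASSES[total % len(ASL_CLASSES)]

def translate(frames):
    """Run the full pipeline, skipping frames where no hand is found."""
    labels = []
    for frame in video_stream(frames):
        hand = segment_hand(frame)
        if hand is not None:
            labels.append(classify_sign(hand))
    return labels
```

Processing one frame at a time, as the abstract describes, keeps the pipeline streaming-friendly: each stage only ever holds a single frame's data, so the same structure works for a live webcam feed.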

Citation (APA)

Mishra, D., Tyagi, M., Verma, A., & Dubey, G. (2020). Sign language translator. International Journal of Advanced Science and Technology, 29(5 Special Issue), 246–253. https://doi.org/10.26562/irjcs.2023.v1005.28
