Real-Time Prediction of American Sign Language Using Convolutional Neural Networks

3 Citations · 13 Mendeley Readers

Abstract

American Sign Language (ASL) was developed in the early 19th century at the American School for the Deaf in the United States. It is a natural language influenced by French Sign Language and is used by around half a million people worldwide, the majority in North America. Deaf culture views deafness as a difference in human experience rather than a disability, and ASL plays an important role in that experience. In this project, we use Convolutional Neural Networks to build a robust model that recognizes 29 ASL characters (the 26 letters and 3 special characters). We host the model locally behind a real-time video interface that displays the predicted English characters on screen like subtitles. We view the application as a one-way translator from ASL fingerspelling to English. We describe the whole procedure in our paper and explore some useful applications that can be implemented.
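To make the pipeline concrete, the core idea the abstract describes is a convolutional network that maps an image of a hand to a probability distribution over 29 classes. The following is a minimal NumPy sketch of that idea only; the 28×28 input size, the single 3×3 kernel, and the tiny conv → ReLU → pool → dense → softmax stack are illustrative assumptions for exposition, not the paper's actual architecture or trained weights.

```python
import numpy as np

NUM_CLASSES = 29  # 26 letters + 3 special characters, as in the paper


def conv2d(image, kernel):
    """Valid 2-D convolution over a single-channel image (the core CNN op)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out


def softmax(x):
    """Numerically stable softmax producing class probabilities."""
    e = np.exp(x - x.max())
    return e / e.sum()


def predict(image, kernel, weights, bias):
    """Toy forward pass: conv -> ReLU -> global average pool -> dense -> softmax."""
    feat = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
    pooled = feat.mean()                           # global average pooling to a scalar
    logits = pooled * weights + bias               # dense layer: 1 -> 29 logits
    return softmax(logits)


# Untrained, randomly initialized parameters; a stand-in for a cropped hand frame.
rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal(NUM_CLASSES)
bias = np.zeros(NUM_CLASSES)

probs = predict(image, kernel, weights, bias)
predicted_class = int(np.argmax(probs))  # index of the most likely ASL character
```

In the real system, `predict` would be replaced by a trained multi-layer CNN, and the loop around it would read frames from a webcam and overlay the predicted character as a subtitle.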

Citation (APA)

Sinha, S., Singh, S., Rawat, S., & Chopra, A. (2019). Real time prediction of American Sign Language using convolutional neural networks. In Communications in Computer and Information Science (Vol. 1045, pp. 22–31). Springer. https://doi.org/10.1007/978-981-13-9939-8_3
