Vision-Based American Sign Language Classification Approach via Deep Learning


Abstract

Hearing impairment is the partial or total loss of hearing, and it creates significant barriers to communication with other people in society. American Sign Language (ASL) is one of the sign languages most commonly used by hearing-impaired communities to communicate with each other. In this paper, we propose a simple deep learning model that classifies American Sign Language letters, as a step toward removing disability-related communication barriers.
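The abstract does not include implementation details, but letter-level ASL classifiers of the kind described are typically small convolutional networks over grayscale hand images. The sketch below is an illustration only, not the authors' model: the NumPy-only forward pass, the 28x28 input size, the single conv layer, and the 24-class output (static ASL letters, excluding the motion-based J and Z) are all assumptions.

```python
import numpy as np

def conv2d(x, kernels):
    """Valid-mode 2D convolution: x (H, W), kernels (K, kh, kw) -> (K, H-kh+1, W-kw+1)."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling over (K, H, W); H and W assumed even."""
    K, H, W = x.shape
    return x.reshape(K, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(image, kernels, w, b):
    """Single-image forward pass: conv -> ReLU -> 2x2 max-pool -> dense softmax."""
    h = maxpool2(relu(conv2d(image, kernels)))
    return softmax(h.reshape(-1) @ w + b)

rng = np.random.default_rng(0)
img = rng.random((28, 28))               # one grayscale hand image (assumed size)
kernels = rng.normal(0, 0.1, (8, 5, 5))  # 8 random 5x5 filters (untrained)
# conv output 24x24, pooled to 12x12, 8 channels -> 1152 features, 24 classes
w = rng.normal(0, 0.01, (8 * 12 * 12, 24))
b = np.zeros(24)
probs = forward(img, kernels, w, b)
print(probs.shape)  # (24,): one probability per static ASL letter
```

In practice such a model would be trained with cross-entropy loss in a framework like TensorFlow or PyTorch; this sketch only shows the inference-time data flow.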

Citation (APA)

Elsayed, N., Elsayed, Z., & Maida, A. S. (2022). Vision-Based American Sign Language Classification Approach via Deep Learning. In Proceedings of the International Florida Artificial Intelligence Research Society Conference, FLAIRS (Vol. 35). Florida Online Journals, University of Florida. https://doi.org/10.32473/flairs.v35i.130616
