Sign Language Recognition and Translation: A Multi-Modal Approach using Computer Vision and Natural Language Processing


Abstract

Sign-to-Text (S2T) is a hand gesture recognition system in the American Sign Language (ASL) domain. The primary objective of S2T is to classify standard ASL alphabet signs and custom signs with neural networks and convert the classifications into a stream of text. This paper addresses the shortcomings of pure Computer Vision techniques and applies Natural Language Processing (NLP) as an additional layer to increase S2T's robustness.
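
The paper does not include code here; as a rough illustration of the pipeline the abstract describes, the sketch below assumes a vision stage that emits per-frame probabilities over the 26 ASL alphabet letters, and shows an NLP-style correction layer that snaps the decoded character stream to an in-vocabulary word. All function names, thresholds, and the toy vocabulary are hypothetical and not taken from the paper.

```python
# Minimal sketch of an S2T-style pipeline (illustrative only).
# Assumption: the vision stage yields per-frame probability vectors over the
# 26 ASL fingerspelling letters; the NLP layer then corrects the decoded
# string against a small vocabulary. Names and thresholds are hypothetical.

import difflib
import string
from typing import List, Sequence

LETTERS = list(string.ascii_lowercase)


def decode_frames(frame_probs: Sequence[Sequence[float]],
                  min_confidence: float = 0.6) -> str:
    """Greedy per-frame decoding: keep the top letter when confident,
    collapsing consecutive repeats into a single character."""
    decoded: List[str] = []
    for probs in frame_probs:
        best = max(range(len(LETTERS)), key=lambda i: probs[i])
        if probs[best] < min_confidence:
            continue  # skip low-confidence frames
        letter = LETTERS[best]
        if not decoded or decoded[-1] != letter:
            decoded.append(letter)
    return "".join(decoded)


def nlp_correct(raw_word: str, vocabulary: Sequence[str]) -> str:
    """NLP layer: snap the noisy character stream to the closest
    in-vocabulary word (a stand-in for a learned language model)."""
    matches = difflib.get_close_matches(raw_word, vocabulary, n=1, cutoff=0.5)
    return matches[0] if matches else raw_word


if __name__ == "__main__":
    # Toy per-frame classifier outputs spread over the 26 classes.
    def one_hot(letter: str, p: float = 0.9) -> List[float]:
        rest = (1.0 - p) / 25
        return [p if c == letter else rest for c in LETTERS]

    frames = [one_hot(c) for c in "hheelloo"]   # several frames per held sign
    raw = decode_frames(frames)                 # -> "helo" after collapsing repeats
    word = nlp_correct(raw, ["hello", "help", "world"])
    print(raw, "->", word)                      # helo -> hello
```

The toy example also shows why a purely visual decoder struggles: collapsing repeated frames drops doubled letters such as the "ll" in "hello", an error that a vocabulary- or language-model-based layer can recover.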

Citation (APA)

Li, J., Gerdes, J., Gojit, J., Tao, A., Katke, S., Nguyen, K., & Ahmadnia, B. (2023). Sign Language Recognition and Translation: A Multi-Modal Approach using Computer Vision and Natural Language Processing. In International Conference Recent Advances in Natural Language Processing, RANLP (pp. 658–665). Incoma Ltd. https://doi.org/10.26615/978-954-452-092-2_071
