Deep Learning and Sign Language Models Based Enhanced Accessibility of e-governance Services for Speech and Hearing-Impaired


Abstract

Sign language is the basic means of communication for the hearing and speech impaired. Communication can be made easier by developing a robust system that transcribes real-life spoken-language sentences into the corresponding sign language video sequence and vice versa. Such a system is built with both a sign recognition unit and a sign translation unit. In this paper, we provide an in-depth analysis of the models proposed for developing such a robust system, discussing their pros and cons. In addition, we evaluate the performance of those models based on the quality of the output from the video generation unit. We also outline future directions for building real-life sign language production (SLP) communication models with advanced deep learning architectures for the hearing and speech impaired, thus paving the way to impart education and employment among this community.

Citation (APA)

Eunice, R. J., & Hemanth, D. J. (2022). Deep Learning and Sign Language Models Based Enhanced Accessibility of e-governance Services for Speech and Hearing-Impaired. In Communications in Computer and Information Science (Vol. 1666 CCIS, pp. 12–24). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-22950-3_2
