Continuous Sign Language Recognition and Its Translation into Intonation-Colored Speech


Abstract

This article addresses the problem of converting sign language into coherent text with intonation markup for the subsequent voice synthesis of signed phrases as intonation-colored speech. The paper proposes an improved method for continuous sign language recognition, whose output is passed to a natural language processor built on morphological, syntactic, and semantic analyzers of the Kazakh language, including morphological inflection and the construction of an intonation model for simple sentences. This approach has significant practical and social value, as it can lead to technologies that help people with disabilities communicate and improve their quality of life. Cross-validation of the model yielded an average test accuracy of 0.97 and an average validation accuracy (val_accuracy) of 0.90. We also identified 20 sentence structures of the Kazakh language together with their intonation models.
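The cross-validation procedure mentioned above can be sketched generically. The snippet below is a minimal, hypothetical illustration of k-fold evaluation reporting mean train and validation accuracy; the classifier and data are stand-ins, not the authors' sign-language recognition model or dataset.

```python
import numpy as np


def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)


def cross_validate(fit, score, X, y, k=5):
    """Return (mean train accuracy, mean validation accuracy) over k folds.

    `fit(X, y)` trains and returns a model; `score(model, X, y)` returns
    accuracy in [0, 1]. Both are supplied by the caller.
    """
    folds = kfold_indices(len(X), k)
    train_accs, val_accs = [], []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        train_accs.append(score(model, X[train_idx], y[train_idx]))
        val_accs.append(score(model, X[val_idx], y[val_idx]))
    return float(np.mean(train_accs)), float(np.mean(val_accs))


if __name__ == "__main__":
    # Toy demonstration with a majority-class baseline "model".
    X = np.arange(100).reshape(-1, 1)
    y = (np.arange(100) % 3 == 0).astype(int)
    fit = lambda X, y: np.bincount(y).argmax()          # model = majority label
    score = lambda m, X, y: float(np.mean(y == m))      # plain accuracy
    train_acc, val_acc = cross_validate(fit, score, X, y, k=5)
    print(f"mean train accuracy: {train_acc:.2f}, mean val accuracy: {val_acc:.2f}")
```

In the paper's setting, `fit` and `score` would wrap training and evaluation of the continuous recognition model; the averaging over folds is what produces the reported mean test and validation accuracies.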

Citation (APA)
Amangeldy, N., Ukenova, A., Bekmanova, G., Razakhova, B., Milosz, M., & Kudubayeva, S. (2023). Continuous Sign Language Recognition and Its Translation into Intonation-Colored Speech. Sensors, 23(14). https://doi.org/10.3390/s23146383
