Character-level Arabic text generation from sign language video using encoder–decoder model

Abstract

Video-to-text conversion is a vital task in the field of computer vision. In recent years, deep learning algorithms have dominated automatic text generation in English, but few research works are available for other languages. In this paper, we propose a novel encoder–decoder system that generates character-level Arabic sentences from isolated RGB videos of Moroccan Sign Language. The video sequence is encoded through spatiotemporal feature extraction using pose estimation models, while the label text of the video is converted into a sequence of representative vectors. The features and the label vectors are then joined and processed by a decoder layer to derive the final prediction. We trained the proposed system on an isolated Moroccan Sign Language dataset (MoSLD), composed of RGB videos of 125 MoSL signs. The experimental results reveal that the proposed model attains the best performance under several evaluation metrics.
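To illustrate the overall pipeline described above, the sketch below shows a minimal character-level encoder–decoder in PyTorch: per-frame pose-keypoint vectors are encoded into a context state, and Arabic characters are then generated one step at a time. The layer sizes, the GRU choice, and the greedy decoding loop are illustrative assumptions and not the paper's exact architecture.

```python
# Minimal sketch of a character-level encoder-decoder for sign-language video,
# assuming per-frame pose keypoints as input features. Dimensions and layer
# choices are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class VideoEncoder(nn.Module):
    """Encodes a sequence of per-frame pose-keypoint vectors into a context state."""

    def __init__(self, keypoint_dim: int, hidden_dim: int):
        super().__init__()
        self.rnn = nn.GRU(keypoint_dim, hidden_dim, batch_first=True)

    def forward(self, frames):  # frames: (batch, time, keypoint_dim)
        _, hidden = self.rnn(frames)
        return hidden  # (1, batch, hidden_dim)


class CharDecoder(nn.Module):
    """Generates Arabic characters step by step from the encoder context."""

    def __init__(self, vocab_size: int, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, chars, hidden):  # chars: (batch, steps) of character ids
        emb = self.embed(chars)
        output, hidden = self.rnn(emb, hidden)
        return self.out(output), hidden


if __name__ == "__main__":
    # Hypothetical dimensions: 30 frames, 50 pose keypoints (x, y) = 100 features,
    # and a small Arabic character vocabulary of 40 symbols.
    encoder = VideoEncoder(keypoint_dim=100, hidden_dim=256)
    decoder = CharDecoder(vocab_size=40, embed_dim=64, hidden_dim=256)

    frames = torch.randn(2, 30, 100)          # batch of 2 pose sequences
    context = encoder(frames)

    # Greedy character-by-character decoding from a start token (id 0).
    token = torch.zeros(2, 1, dtype=torch.long)
    hidden = context
    for _ in range(20):                       # cap the output length
        logits, hidden = decoder(token, hidden)
        token = logits.argmax(dim=-1)         # next character ids, shape (2, 1)
    print("decoded step shape:", token.shape)
```

In practice, the decoder would be trained with teacher forcing on the character-level label vectors and stopped at an end-of-sentence symbol; the loop above only shows the inference direction of the data flow.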

Citation (APA)

Boukdir, A., Benaddy, M., Meslouhi, O. E., Kardouchi, M., & Akhloufi, M. (2023). Character-level Arabic text generation from sign language video using encoder–decoder model. Displays, 76. https://doi.org/10.1016/j.displa.2022.102340
