A multi-layer model for sign language's non-manual gestures generation

Abstract

Contrary to popular belief, the structure of signs goes beyond a simple combination of hand movements and shapes. A sign's meaning resides not in the hand shape, position, movement, orientation, or facial expression alone, but in the combination of all five. In this context, our aim is to propose a model for non-manual gesture generation for sign language machine translation. In previous work we developed a gesture generator that does not support facial animation. Here we propose a multi-layer model to be used in developing new software for generating non-manual gestures (NMG). The system is composed of three layers. The first layer is the interface between the system and external programs; its role is to perform the linguistic processing needed to compute linguistic information such as the grammatical structure of the sentence. The second layer contains two modules: the manual gesture generator and the non-manual gesture generator. The non-manual gesture generator uses 3D facial modeling and animation techniques to produce facial expressions in sign language. © 2014 Springer International Publishing.
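
The layered pipeline described in the abstract can be sketched in code. The following Python sketch is purely illustrative: the paper publishes no code, and every name here (LinguisticLayer, ManualGestureGenerator, NonManualGestureGenerator, SignLanguagePipeline, and the data classes) is a hypothetical stand-in. It only shows, under those assumptions, how the first layer's linguistic analysis might feed the two generator modules of the second layer.

    # Illustrative sketch of the multi-layer architecture from the abstract.
    # All class and method names are hypothetical, not taken from the paper.
    from dataclasses import dataclass, field


    @dataclass
    class LinguisticAnalysis:
        """Output of layer 1: linguistic information for one sentence."""
        tokens: list[str]
        grammatical_structure: str  # e.g. a parse label for the sentence


    @dataclass
    class Animation:
        """Output of layer 2: keyframe streams for the signing avatar."""
        manual_keyframes: list[str] = field(default_factory=list)
        facial_keyframes: list[str] = field(default_factory=list)


    class LinguisticLayer:
        """Layer 1: interface to external programs; computes linguistic info."""

        def analyze(self, sentence: str) -> LinguisticAnalysis:
            # Placeholder analysis: real linguistic processing would parse
            # the sentence and compute its grammatical structure.
            return LinguisticAnalysis(tokens=sentence.split(),
                                      grammatical_structure="flat")


    class ManualGestureGenerator:
        """Layer 2, module 1: hand shape, position, movement, orientation."""

        def generate(self, analysis: LinguisticAnalysis) -> list[str]:
            return [f"hand:{tok}" for tok in analysis.tokens]


    class NonManualGestureGenerator:
        """Layer 2, module 2: facial expressions via 3D facial animation."""

        def generate(self, analysis: LinguisticAnalysis) -> list[str]:
            return [f"face:{tok}" for tok in analysis.tokens]


    class SignLanguagePipeline:
        """Wires the layers together: text in, combined animation out."""

        def __init__(self) -> None:
            self.linguistic = LinguisticLayer()
            self.manual = ManualGestureGenerator()
            self.non_manual = NonManualGestureGenerator()

        def translate(self, sentence: str) -> Animation:
            analysis = self.linguistic.analyze(sentence)
            return Animation(
                manual_keyframes=self.manual.generate(analysis),
                facial_keyframes=self.non_manual.generate(analysis),
            )


    if __name__ == "__main__":
        print(SignLanguagePipeline().translate("hello world"))

The point of the sketch is the data flow: both generator modules consume the same linguistic analysis, so manual and non-manual channels stay synchronized per sentence before being merged into one animation.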

Citation (APA)

El Ghoul, O., & Jemni, M. (2014). A multi-layer model for sign language’s non-manual gestures generation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8548 LNCS, pp. 466–473). Springer Verlag. https://doi.org/10.1007/978-3-319-08599-9_70
