Synthesizing sign language by connecting linguistically structured descriptions to a multi-track animation system

Abstract

Animating sign language requires both a model of the structure of the target language and a computer animation system capable of producing the resulting avatar motion. On the language modelling side, AZee proposes a methodology and formal description mechanism for building grammars of sign languages. It has mostly assumed the existence of an avatar capable of rendering its low-level articulation specifications. On the computer animation side, the Paula animator system offers a multi-track sign language (SL) generation platform designed for realistic movement and built from the outset to be driven by linguistic input. Both research efforts have matured in recent years to the point where a connection is now possible; this paper presents a system architecture that draws on the strengths of each. It summarises the essence of both systems and lays out the foundations of a connected system, resulting in a full process from abstract linguistic input straight to animated video. The main contribution lies in addressing the trade-off between coarser natural-looking segments and the composition of linguistically relevant atoms.

Citation (APA)

Filhol, M., McDonald, J., & Wolfe, R. (2017). Synthesizing sign language by connecting linguistically structured descriptions to a multi-track animation system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10278 LNCS, pp. 27–40). Springer Verlag. https://doi.org/10.1007/978-3-319-58703-5_3
