Bridging the gap between sign language machine translation and sign language animation using sequence classification


Abstract

To date, the non-manual components of signed utterances have rarely been considered in automatic sign language translation. However, these components are capable of carrying important linguistic information. This paper presents work that bridges the gap between the output of a sign language translation system and the input of a sign language animation system by incorporating non-manual information into the final output of the translation system. More precisely, the generation of non-manual information is scheduled after the machine translation step and treated as a sequence classification task. While sequence classification has been used to solve automatic spoken language processing tasks, we believe this to be the first work to apply it to the generation of non-manual information in sign languages. All of our experimental approaches outperformed lower baseline approaches, consisting of unigram or bigram models of non-manual features.
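To make the baseline concrete: a bigram model of non-manual features predicts each label (e.g. an eyebrow or head-movement tag) from the previous label alone, using transition counts from training data. The sketch below is an illustration of such a bigram baseline, not the authors' implementation; the label names are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count label-to-label transitions over training sequences
    of non-manual feature labels (label names are hypothetical)."""
    trans = defaultdict(Counter)
    for seq in sequences:
        prev = "<s>"  # sequence-start symbol
        for label in seq:
            trans[prev][label] += 1
            prev = label
    return trans

def predict(trans, length):
    """Greedily emit the most frequent successor label at each step."""
    out, prev = [], "<s>"
    for _ in range(length):
        if not trans[prev]:
            prev = "<s>"  # back off to sequence start if unseen
        label = trans[prev].most_common(1)[0][0]
        out.append(label)
        prev = label
    return out

# Hypothetical training data: per-gloss non-manual labels
data = [["neutral", "raised", "raised"], ["neutral", "raised"]]
model = train_bigram(data)
print(predict(model, 3))  # → ['neutral', 'raised', 'raised']
```

A unigram baseline would simply emit the single most frequent label everywhere; the paper's sequence classifiers, by contrast, can condition on richer context from the translation output.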

Citation (APA)

Ebling, S., & Huenerfauth, M. (2015). Bridging the gap between sign language machine translation and sign language animation using sequence classification. In SLPAT 2015 - 6th Workshop on Speech and Language Processing for Assistive Technologies, Proceedings (pp. 2–9). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w15-5102
