Recognizing continuous grammatical marker facial gestures in sign language video

Abstract

In American Sign Language (ASL) the structure of signed sentences is conveyed by grammatical markers which are represented by facial feature movements and head motions. Without recovering grammatical markers, a sign language recognition system cannot fully reconstruct a signed sentence. However, this problem has been largely neglected in the literature. In this paper, we propose to use a 2-layer Conditional Random Field model for recognizing continuously signed grammatical markers in ASL. This recognition requires identifying both facial feature movements and head motions while dealing with uncertainty introduced by movement epenthesis and other effects. We used videos of the signers' faces, recorded while they signed simple sentences containing multiple grammatical markers. In our experiments, the proposed classifier yielded a precision rate of 93.76% and a recall rate of 85.54%. © 2011 Springer-Verlag Berlin Heidelberg.
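The reported figures are the standard precision and recall metrics over recognized grammatical markers. As a minimal illustration of how such rates are computed (the counts below are hypothetical, not taken from the paper):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted grammatical markers that were correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of true grammatical markers that were recovered."""
    return tp / (tp + fn)

# Hypothetical counts for one marker-recognition run:
# 71 true positives, 9 false positives, 12 false negatives.
tp, fp, fn = 71, 9, 12
print(f"precision = {precision(tp, fp):.2%}")  # 71 / 80
print(f"recall    = {recall(tp, fn):.2%}")     # 71 / 83
```

A high precision with lower recall, as reported here, means the classifier's detections were usually correct but some true markers went unrecognized.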

Citation (APA)

Nguyen, T. D., & Ranganath, S. (2011). Recognizing continuous grammatical marker facial gestures in sign language video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6495 LNCS, pp. 665–676). https://doi.org/10.1007/978-3-642-19282-1_53
