Robust person-independent visual sign language recognition

Abstract

Sign language recognition constitutes a challenging field of research in computer vision. Common problems like overlap, ambiguities, and minimal pairs occur frequently and require robust algorithms for feature extraction and processing. We present a system that performs person-dependent recognition of 232 isolated signs with an accuracy of 99.3% in a controlled environment. Person-independent recognition rates reach 44.1% for 221 signs. An average performance of 87.8% is achieved for six signers in various uncontrolled indoor and outdoor environments, using a reduced vocabulary of 18 signs. The system uses a background model to remove static areas from the input video at pixel level. In the tracking stage, multiple hypotheses are pursued in parallel to handle ambiguities and facilitate retrospective correction of errors. A winner hypothesis is found by applying high-level knowledge of the human body, hand motion, and the signing process. Overlaps are resolved by template matching, exploiting temporally adjacent frames with less or no overlap. The extracted features are normalized for person-independence and robustness, and classified by Hidden Markov Models. © Springer-Verlag Berlin Heidelberg 2005.
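
The pixel-level background model mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' exact method, only a common baseline: each pixel keeps a running-average background estimate, and pixels that differ from it by more than a threshold are marked as foreground (moving) areas. The function names and parameters (`alpha`, `threshold`) are illustrative assumptions.

```python
# Hedged sketch of per-pixel background modelling (running average),
# illustrating how static areas can be removed from a video frame.
# Images are flattened to 1-D lists of grey values for simplicity.

def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the background estimate (illustrative alpha)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=25):
    """Mark pixels that differ noticeably from the background as foreground."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

# Tiny 4-pixel "image": only pixel 2 changes; the rest stay static.
background = [100.0, 100.0, 100.0, 100.0]
frame = [102.0, 99.0, 180.0, 101.0]

mask = foreground_mask(background, frame)          # [False, False, True, False]
background = update_background(background, frame)  # static pixels barely move
```

Only the moving pixel survives the mask; the static areas are suppressed, which is the effect the system relies on before tracking.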

Citation (APA)

Zieren, J., & Kraiss, K. F. (2005). Robust person-independent visual sign language recognition. In Lecture Notes in Computer Science (Vol. 3522, pp. 520–528). Springer Verlag. https://doi.org/10.1007/11492429_63
