Appearance-based recognition of words in American sign language

Abstract

In this paper, we present how appearance-based features can be used for the recognition of words in American sign language (ASL) from a video stream. The features are extracted without any segmentation or tracking of the hands or head of the signer, which avoids possible errors in the segmentation step. Experiments are performed on a database that consists of 10 words in ASL with 110 utterances in total. These data are extracted from a publicly available collection of videos and can therefore be used by other research groups. The video streams of two stationary cameras are used for classification, but we observe that one camera alone already leads to sufficient accuracy. Hidden Markov models and the leaving-one-out method are employed for training and classification. Using the simple appearance-based features, we achieve an error rate of 7%. About half of the remaining errors are due to utterances that are visually different from all other utterances of the same word.
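The abstract describes the pipeline only at a high level. The following is a minimal sketch of how such an evaluation could be assembled: appearance-based features taken directly from the video frames (here assumed to be down-scaled grayscale images, which may differ from the paper's exact feature set), one Gaussian HMM per word, and leave-one-out classification by maximum log-likelihood. The use of OpenCV and the third-party hmmlearn package, the frame size, the number of HMM states, the directory layout, and all function names are illustrative assumptions, not the authors' implementation.

# Hedged sketch: appearance-based features + per-word Gaussian HMMs,
# evaluated with leave-one-out. All parameters are illustrative
# assumptions, not the paper's exact setup.
import glob
import numpy as np
import cv2                      # OpenCV, used here only for video decoding
from hmmlearn import hmm        # third-party HMM library (assumption)

FRAME_SIZE = (32, 32)           # down-scaling resolution (assumption)
N_STATES = 5                    # HMM states per word model (assumption)

def appearance_features(video_path):
    """Return a (T, 32*32) array of down-scaled grayscale frames.
    No hand/head segmentation or tracking is performed."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, FRAME_SIZE).astype(np.float64) / 255.0
        frames.append(small.ravel())
    cap.release()
    return np.vstack(frames)

def train_word_model(sequences):
    """Fit one Gaussian HMM on all training utterances of a word."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=N_STATES,
                            covariance_type="diag", n_iter=20)
    model.fit(X, lengths)
    return model

def leave_one_out(data):
    """data: dict word -> list of feature sequences. Returns error rate."""
    errors, total = 0, 0
    items = [(w, i) for w, seqs in data.items() for i in range(len(seqs))]
    for word, idx in items:
        # Hold out one utterance, train word models on everything else.
        models = {}
        for w, seqs in data.items():
            train = [s for j, s in enumerate(seqs)
                     if not (w == word and j == idx)]
            if train:
                models[w] = train_word_model(train)
        test_seq = data[word][idx]
        # Classify by maximum log-likelihood over the word models.
        best = max(models, key=lambda w: models[w].score(test_seq))
        errors += (best != word)
        total += 1
    return errors / total

if __name__ == "__main__":
    # Hypothetical directory layout: videos/<word>/<utterance>.mp4
    data = {}
    for path in glob.glob("videos/*/*.mp4"):
        word = path.split("/")[-2]
        data.setdefault(word, []).append(appearance_features(path))
    print("leave-one-out error rate:", leave_one_out(data))

In this sketch the error rate is simply the fraction of held-out utterances assigned to the wrong word model; the paper reports 7% under its own features and setup.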

Citation (APA)

Zahedi, M., Keysers, D., & Ney, H. (2005). Appearance-based recognition of words in American sign language. In Lecture Notes in Computer Science (Vol. 3522, pp. 511–519). Springer-Verlag. https://doi.org/10.1007/11492429_62
