A vision-based method for recognizing non-manual information in Japanese sign language

Abstract

This paper describes a vision-based method for recognizing non-manual information in Japanese Sign Language (JSL). This modality provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant form of non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are arranged vertically to capture frontal and profile images of the JSL user, and head motions are classified into eleven patterns. Moment-based features and statistical motion features are adopted to represent these motion patterns. Classification of the motion features is performed with the linear discriminant analysis method. Initial experimental results show that the method achieves a good recognition rate and can run in real time.
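The classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimension, sample counts, and synthetic data are hypothetical stand-ins for the moment-based and statistical motion features the paper extracts, and scikit-learn's LDA is used in place of whatever implementation the authors wrote.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: one feature vector per head-motion sample.
# The paper classifies head motions into eleven patterns; the feature
# dimension (8) and samples per class (20) are illustrative only.
rng = np.random.default_rng(0)
n_patterns, n_per_class, n_features = 11, 20, 8

# Synthetic, well-separated class clusters standing in for real
# moment-based / statistical motion features.
X = np.vstack([
    rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
    for c in range(n_patterns)
])
y = np.repeat(np.arange(n_patterns), n_per_class)

# Fit LDA and label each sample with one of the eleven motion patterns.
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
pred = clf.predict(X)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

LDA is a natural fit here because it is cheap to evaluate at test time (a linear projection plus a nearest-class decision), which is consistent with the paper's real-time claim.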

Citation (APA)

Xu, M., Raytchev, B., Sakaue, K., Hasegawa, O., Koizumi, A., Takeuchi, M., & Sagawa, H. (2000). A vision-based method for recognizing non-manual information in Japanese sign language. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1948, pp. 572–581). Springer Verlag. https://doi.org/10.1007/3-540-40063-x_75
