A two-stage visual Turkish sign language recognition system based on global and local features


Abstract

To enable communication between deaf people and hearing people, a two-stage system that translates Turkish Sign Language into Turkish is developed using a vision-based approach. Hidden Markov models are applied to the global feature group in the dynamic gesture recognition stage, and the k-nearest-neighbor algorithm compares local features in the static gesture recognition stage. The system performs person-dependent recognition of 172 isolated signs. © Springer-Verlag Berlin Heidelberg 2006.
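The two-stage design described in the abstract can be sketched roughly as follows: a bank of per-sign HMMs scores the global (trajectory-like) features of a dynamic gesture, and a k-nearest-neighbor classifier over local (hand-shape) features resolves signs that the first stage cannot separate on its own. The sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation; the hmmlearn and scikit-learn libraries, the feature layouts, and all names are assumptions introduced here.

```python
# Illustrative two-stage recognizer: HMMs over global features (stage 1),
# k-NN over local features (stage 2). Libraries and feature shapes are
# assumptions for illustration, not the paper's actual implementation.
import numpy as np
from hmmlearn import hmm
from sklearn.neighbors import KNeighborsClassifier


class TwoStageRecognizer:
    def __init__(self, n_states=4, k=3):
        self.n_states = n_states          # hidden states per sign HMM (assumed)
        self.hmms = {}                    # sign label -> trained GaussianHMM
        self.knn = KNeighborsClassifier(n_neighbors=k)
        self.ambiguous = set()            # signs stage 1 cannot separate alone

    def fit_global(self, sequences_by_sign):
        """Train one HMM per sign on global (e.g., hand-trajectory) sequences.

        sequences_by_sign: dict mapping sign label -> list of (T_i, D) arrays.
        """
        for sign, seqs in sequences_by_sign.items():
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            model = hmm.GaussianHMM(n_components=self.n_states,
                                    covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            self.hmms[sign] = model

    def fit_local(self, local_features, labels, ambiguous_signs):
        """Train k-NN on local (e.g., hand-shape) features of ambiguous signs."""
        self.ambiguous = set(ambiguous_signs)
        self.knn.fit(np.asarray(local_features), np.asarray(labels))

    def predict(self, global_seq, local_feat=None):
        # Stage 1: pick the sign whose HMM gives the highest log-likelihood.
        scores = {s: m.score(global_seq) for s, m in self.hmms.items()}
        best = max(scores, key=scores.get)
        # Stage 2: if the winner belongs to an ambiguous group and local
        # features are available, let k-NN make the final decision.
        if best in self.ambiguous and local_feat is not None:
            best = self.knn.predict(local_feat.reshape(1, -1))[0]
        return best
```

In use, fit_global would cover all 172 signs while fit_local would be trained only on the subset whose global trajectories overlap, which roughly mirrors the division of labor between the dynamic and static recognition stages described in the abstract.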

APA

Haberdar, H., & Albayrak, S. (2006). A two-stage visual Turkish sign language recognition system based on global and local features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4203 LNAI, pp. 29–37). Springer Verlag. https://doi.org/10.1007/11875604_5
