Toward automatic sign language recognition from Web3D based scenes


Abstract

This paper describes the development of a 3D continuous sign language recognition system. Since many systems such as WebSign[1], Vsigns[2] and eSign[3] use Web3D standards to generate 3D signing avatars, 3D signed sentences are becoming common. Hidden Markov Models (HMMs) are the most widely used method for recognizing sign language from video-based scenes; in our case, however, since we deal with well-formatted 3D scenes based on the H-Anim and X3D standards, the doubly stochastic HMM process is needlessly costly. We present a novel approach to sign language recognition based on the Longest Common Subsequence (LCS) method. Our recognition experiments, based on a 500-sign lexicon, reach 99% accuracy. © 2010 Springer-Verlag Berlin Heidelberg.
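The matching method named in the abstract is the classic Longest Common Subsequence dynamic program. A minimal sketch of that technique is below; the `best_match` ranking over a lexicon of encoded sign sequences is a hypothetical illustration, since the paper's actual feature encoding of X3D/H-Anim scenes is not given in the abstract:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b.

    Classic O(len(a) * len(b)) dynamic programming: dp[i][j] holds the
    LCS length of the prefixes a[:i] and b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]


def best_match(observed, lexicon):
    """Hypothetical recognizer step: rank lexicon entries against an
    observed sequence by LCS length normalized by reference length."""
    return max(lexicon, key=lambda ref: lcs_length(observed, ref) / max(len(ref), 1))


# Toy usage with made-up symbolic sign tokens:
observed = ["handshape_B", "move_up", "handshape_A"]
lexicon = [["handshape_B", "move_up", "handshape_A"],
           ["handshape_C", "move_down"]]
print(best_match(observed, lexicon))  # the first entry matches fully
```

Unlike an HMM, this comparison needs no trained transition or emission probabilities, which is the cost advantage the abstract alludes to for well-structured 3D input.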

Citation (APA)

Jaballah, K., & Jemni, M. (2010). Toward automatic sign language recognition from Web3D based scenes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6180 LNCS, pp. 205–212). https://doi.org/10.1007/978-3-642-14100-3_31
