This paper describes the development of a 3D continuous sign language recognition system. Since many systems, such as WebSign [1], Vsigns [2] and eSign [3], use Web3D standards to generate 3D signing avatars, 3D signed sentences are becoming common. Hidden Markov Models (HMM) are the most widely used method for recognizing sign language from video-based scenes, but in our case, since we deal with well-formatted 3D scenes based on the H-Anim and X3D standards, the doubly stochastic HMM process is too costly. We present a novel approach to sign language recognition based on the Longest Common Subsequence (LCS) method. Our recognition experiments, conducted on a 500-sign lexicon, reach 99% accuracy. © 2010 Springer-Verlag Berlin Heidelberg.
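The abstract names the Longest Common Subsequence method as the core of the recognizer. As an illustration only, here is a minimal sketch of LCS-based matching between an observed sequence of scene tokens and lexicon entries; the `lcs_length` dynamic program is the standard textbook algorithm, while `best_match` and its length-normalized scoring are hypothetical and not taken from the paper.

```python
def lcs_length(a, b):
    # Classic dynamic-programming LCS: dp[i][j] holds the LCS length
    # of the first i items of a and the first j items of b.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def best_match(observed, lexicon):
    # Hypothetical matcher (not from the paper): score each lexicon
    # entry by its LCS overlap with the observed token sequence,
    # normalized by the entry's length, and return the best sign label.
    return max(lexicon,
               key=lambda sign: lcs_length(observed, lexicon[sign]) / len(lexicon[sign]))

# Example with made-up posture tokens:
lexicon = {"HELLO": ["H1", "H2", "H3"], "THANKS": ["T1", "T2"]}
print(best_match(["H1", "X", "H3"], lexicon))  # HELLO
```

Because LCS tolerates insertions and deletions in the observed stream, a noisy or partially captured sign can still match its lexicon entry, which is one plausible reason such a method suits well-formatted X3D scenes better than a full HMM decoder.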
Jaballah, K., & Jemni, M. (2010). Toward automatic sign language recognition from Web3D based scenes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6180 LNCS, pp. 205–212). https://doi.org/10.1007/978-3-642-14100-3_31