We propose and implement a Wearable Personal Station (WPS) and a Web-based robust Language Processing Interface (LPI) that integrates speech and sign language recognition (using the Korean Standard Sign Language, KSSL). The LPI is an integrated language recognition and processing system that selects the recognition modality best suited to the noise level of the current environment. It extends the traditional uni-modal language recognition system, which relies on a single sensory channel, a desktop PC, and a wired network, into an embedded, ubiquitous-oriented next-generation language processing system. In our experiments, the uni-modal recognizers achieved average recognition rates of 92.58% (KSSL only) and 93.28% (speech only), while the advanced LPI achieved an average recognition rate of 95.09% over 52 sentential recognition models, with an average recognition time of 0.3 seconds. © Springer-Verlag Berlin Heidelberg 2006.
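The core idea of the LPI, choosing between the speech recognizer and the KSSL sign recognizer according to ambient noise, can be sketched as follows. This is a minimal illustration, not the paper's actual decision rule: the threshold value, the function names, and the stub recognizers are all hypothetical assumptions.

```python
# Hypothetical sketch of noise-adaptive modality selection, mirroring the
# LPI's idea of picking the recognizer least degraded by environmental noise.
# The 60 dB threshold and all function names are illustrative assumptions;
# the paper does not specify its decision rule.

def select_modality(noise_db: float, threshold_db: float = 60.0) -> str:
    """Return the modality to use for the measured ambient noise level."""
    # In loud environments the acoustic channel degrades, so fall back
    # to the visual (sign language) channel; otherwise prefer speech.
    return "sign" if noise_db >= threshold_db else "speech"


def speech_recognizer(features) -> str:
    # Stub standing in for the speech recognition engine.
    return "speech result"


def sign_recognizer(features) -> str:
    # Stub standing in for the KSSL sign language recognition engine.
    return "sign result"


def recognize(features, noise_db: float) -> str:
    """Route the input to whichever recognizer the noise level favors."""
    if select_modality(noise_db) == "speech":
        return speech_recognizer(features)
    return sign_recognizer(features)
```

Under this sketch, a quiet office would route input to the speech recognizer, while a noisy street would route it to the sign language recognizer.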
CITATION STYLE
Kim, J. H., & Hong, K. S. (2006). Speech and gesture recognition-based robust language processing interface in noise environment. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4224 LNCS, pp. 338–345). Springer Verlag. https://doi.org/10.1007/11875581_41