Multi-modal sign icon retrieval for augmentative communication

Abstract

This paper presents a multi-modal sign icon retrieval and prediction technology for generating sentences from ill-formed Taiwanese Sign Language (TSL) for people with speech or hearing impairments. The design and development of this PC-based TSL augmentative and alternative communication (AAC) system aims to improve the input rate and accuracy of communication aids. This study focuses on 1) developing an effective TSL icon retrieval method, 2) investigating TSL prediction strategies for input rate enhancement, and 3) using a predictive sentence template (PST) tree for sentence generation. The proposed system assists people with language disabilities in sentence formation. To evaluate the performance of our approach, a pilot study for clinical evaluation and education training was undertaken. The evaluation results show that the retrieval rate and subjective satisfaction level for sentence generation were significantly improved.
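To make the sentence-generation idea concrete, the sketch below shows one way a predictive sentence template lookup could work: a small set of templates, each an ordered sequence of semantic slot categories with a sentence frame, is matched against the categories of the icons the user has selected. All names, the greedy matching score, and the data layout are illustrative assumptions, not the paper's actual PST-tree structure.

```python
# Illustrative sketch of predictive-sentence-template lookup (assumed
# structures; the paper's actual PST tree is not reproduced here).
from dataclasses import dataclass

@dataclass
class Template:
    slots: tuple  # ordered semantic categories, e.g. ("subject", "verb", "object")
    frame: str    # sentence frame with {} placeholders, one per slot

TEMPLATES = [
    Template(("subject", "verb", "object"), "{} {} {}."),
    Template(("subject", "verb"), "{} {}."),
]

def generate(icons):
    """icons: list of (category, word) pairs from the selected sign icons.
    Score each template by positional category matches, penalized by the
    length mismatch, then fill the best frame (a greedy toy strategy)."""
    cats = [c for c, _ in icons]
    best = max(
        TEMPLATES,
        key=lambda t: sum(a == b for a, b in zip(t.slots, cats))
        - abs(len(t.slots) - len(cats)),
    )
    words = [w for _, w in icons][: len(best.slots)]
    return best.frame.format(*words)

print(generate([("subject", "I"), ("verb", "drink"), ("object", "water")]))
```

For a partial icon sequence such as `[("subject", "I"), ("verb", "eat")]`, the shorter template wins and the sketch emits "I eat.", loosely mirroring how template matching can complete ill-formed input.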

Citation (APA)

Wu, C. H., Chiu, Y. H., & Cheng, K. W. (2001). Multi-modal sign icon retrieval for augmentative communication. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2195, pp. 598–605). Springer Verlag. https://doi.org/10.1007/3-540-45453-5_77
