Boosted subunits: A framework for recognising sign language from videos


Abstract

This study addresses the problem of vision-based sign language recognition, that is, translating signed gestures in video into their word labels. The authors propose a fully automatic system that starts by breaking signs up into manageable subunits. A variety of spatiotemporal descriptors are extracted to form a feature vector for each subunit. Based on the obtained features, subunits are clustered to yield codebooks. A boosting algorithm is then applied to learn a subset of weak classifiers representing discriminative combinations of features and subunits, and to combine them into a strong classifier for each sign. A joint learning strategy is also adopted to share subunits across sign classes, which leads to more efficient classification. Experimental results on real-world hand gesture videos demonstrate that the proposed approach is a promising basis for an effective and scalable system. © The Institution of Engineering and Technology 2013.
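The core idea sketched in the abstract, combining weak classifiers over subunit-based features into a strong per-sign classifier via boosting, can be illustrated with a minimal AdaBoost-with-decision-stumps implementation. This is a generic sketch of the boosting step only, not the authors' actual system: the feature vectors, codebook sizes, and the synthetic "bag-of-subunits" histograms below are hypothetical stand-ins for the spatiotemporal descriptors and codebooks described in the paper.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=20):
    """Boost decision stumps into a strong binary classifier.

    X: (n_samples, n_features) feature vectors, e.g. histograms of
       subunit codewords (hypothetical stand-in for the paper's features).
    y: labels in {-1, +1} (one sign vs. the rest).
    Returns a list of weighted weak classifiers (alpha, feature, thr, sign).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # uniform sample weights to start
    learners = []
    for _ in range(n_rounds):
        best = None
        # Exhaustively pick the stump with lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)        # avoid division by zero
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified samples
        w /= w.sum()
        learners.append((alpha, j, thr, sign))
    return learners

def predict(learners, X):
    """Combine the weak classifiers into a strong classifier by weighted vote."""
    score = np.zeros(X.shape[0])
    for alpha, j, thr, sign in learners:
        score += alpha * sign * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)

# Toy usage: synthetic codeword histograms for two sign classes.
rng = np.random.default_rng(0)
pos = rng.normal(loc=[3.0, 1.0, 0.5], scale=0.5, size=(30, 3))
neg = rng.normal(loc=[1.0, 3.0, 0.5], scale=0.5, size=(30, 3))
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(30), -np.ones(30)])
learners = train_adaboost_stumps(X, y, n_rounds=10)
acc = np.mean(predict(learners, X) == y)
```

In the paper's setting, one such strong classifier would be trained per sign class, and the joint learning strategy would additionally encourage different sign classifiers to reuse the same discriminative subunits, which is what makes the approach scale with vocabulary size.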

Citation (APA)

Han, J., Awad, G., & Sutherland, A. (2013). Boosted subunits: A framework for recognising sign language from videos. IET Image Processing. https://doi.org/10.1049/iet-ipr.2012.0273
