Sign language recognition using sub-units

Abstract

This paper discusses sign language recognition using linguistic sub-units. It presents three types of sub-units for consideration: those learnt from appearance data, as well as those inferred from either 2D or 3D tracking data. These sub-units are then combined using a sign-level classifier; two options are presented. The first uses Markov Models to encode the temporal changes between sub-units. The second makes use of Sequential Pattern Boosting to apply discriminative feature selection at the same time as encoding temporal information. This approach is more robust to noise and performs well in signer-independent tests, improving results from the 54% achieved by the Markov Chains to 76%. © 2012 Helen Cooper, Nicolas Pugeault, Eng-Jon Ong and Richard Bowden.
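The Markov Model option described above lends itself to a brief illustration. The sketch below is not the authors' implementation: it assumes each sign example has already been reduced to a sequence of discrete sub-unit labels (appearance- or tracking-derived), and it fits one first-order Markov chain per sign, classifying a new sequence by its transition log-likelihood. All names (MarkovChainSignClassifier, n_subunits, and so on) are hypothetical.

```python
import numpy as np
from collections import defaultdict


class MarkovChainSignClassifier:
    """One first-order Markov chain per sign over discrete sub-unit labels.

    Illustrative only: assumes each example is a sequence of integer labels
    in [0, n_subunits); sub-unit extraction itself is out of scope here.
    """

    def __init__(self, n_subunits, smoothing=1.0):
        self.n_subunits = n_subunits
        self.smoothing = smoothing   # additive smoothing for unseen transitions
        self.log_trans = {}          # sign -> (n_subunits x n_subunits) log-probabilities

    def fit(self, sequences, labels):
        counts = defaultdict(
            lambda: np.full((self.n_subunits, self.n_subunits), self.smoothing))
        for seq, sign in zip(sequences, labels):
            for a, b in zip(seq[:-1], seq[1:]):
                counts[sign][a, b] += 1.0
        for sign, c in counts.items():
            # Normalise each row into a transition distribution, store logs.
            self.log_trans[sign] = np.log(c / c.sum(axis=1, keepdims=True))
        return self

    def predict(self, seq):
        # Score each sign by the log-likelihood of the observed transitions.
        best_sign, best_score = None, -np.inf
        for sign, lt in self.log_trans.items():
            score = sum(lt[a, b] for a, b in zip(seq[:-1], seq[1:]))
            if score > best_score:
                best_sign, best_score = sign, score
        return best_sign


# Toy usage with made-up sub-unit sequences and sign labels:
clf = MarkovChainSignClassifier(n_subunits=4).fit(
    sequences=[[0, 1, 2, 3], [0, 1, 1, 2]], labels=["hello", "thanks"])
print(clf.predict([0, 1, 2, 3]))  # -> "hello"
```

Sequential Pattern Boosting, the stronger of the two combiners reported in the paper, would instead select discriminative sub-unit patterns within a boosting framework; that is omitted here for brevity.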

Citation (APA)

Cooper, H., Ong, E. J., Pugeault, N., & Bowden, R. (2012). Sign language recognition using sub-units. Journal of Machine Learning Research, 13, 2205–2231. https://doi.org/10.1007/978-3-319-57021-1_3
