Unsupervised feature learning for visual sign language identification

7 citations · 93 Mendeley readers

Abstract

Prior research on language identification has focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning) and, using these features, learns to discriminate between six sign languages (supervised learning). We ran experiments on short video samples involving 30 signers (about 6 hours in total). Using leave-one-signer-out cross-validation, our evaluation shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools, and our results indicate that sign language identification with such techniques is realistic. © 2014 Association for Computational Linguistics.
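The evaluation protocol mentioned in the abstract is worth unpacking: leave-one-signer-out cross-validation holds out every sample from one signer per fold, so the reported accuracy reflects generalization to unseen people rather than memorization of individual signers. Below is a minimal sketch of that protocol using scikit-learn's LeaveOneGroupOut; the feature matrix, the logistic-regression classifier, and the data sizes are placeholders for illustration, not the paper's actual features or model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut

    # Placeholder data standing in for the paper's learned features:
    # one feature vector per short video sample, a sign-language label
    # (6 classes), and the identity of the signer in each sample.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 128))          # hypothetical learned features
    y = rng.integers(0, 6, size=300)         # 6 sign languages
    signers = rng.integers(0, 30, size=300)  # 30 signers (the CV groups)

    # Leave-one-signer-out: each fold holds out all samples of one signer,
    # so the classifier is always tested on a person it has never seen.
    logo = LeaveOneGroupOut()
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups=signers):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(clf.score(X[test_idx], y[test_idx]))

    print(f"Mean accuracy over held-out signers: {np.mean(accuracies):.3f}")

Averaging the per-fold scores in this way corresponds to the kind of average accuracy figure the abstract reports.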

Citation (APA)

Gebre, B. G., Crasborn, O., Wittenburg, P., Drude, S., & Heskes, T. (2014). Unsupervised feature learning for visual sign language identification. In 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference (Vol. 2, pp. 370–376). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/p14-2061
