Learning signs from subtitles: A weakly supervised approach to sign language recognition

Abstract

This paper introduces a fully-automated, unsupervised method to recognise signs from subtitles. It does this by using data mining to align correspondences in sections of video. Based on head and hand tracking, a novel temporally constrained adaptation of apriori mining is used to extract similar regions of video, with the aid of a proposed contextual negative selection method. These regions are refined in the temporal domain to isolate the occurrences of similar signs in each example. The system is shown to automatically identify and segment signs from standard news broadcasts containing a variety of topics. ©2009 IEEE.
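To make the mining step more concrete, the sketch below shows one way an Apriori-style frequent-itemset miner can be restricted by a temporal constraint, in the spirit of the approach the abstract describes. It is a toy illustration under stated assumptions: the MAX_SPAN and MIN_SUPPORT thresholds, the toy transactions, and the (feature, frame-offset) item encoding are all hypothetical stand-ins, not the authors' implementation or the paper's feature representation.

# Illustrative sketch only: Apriori-style frequent-itemset mining with a
# temporal constraint. Names and data are hypothetical.
from itertools import combinations

MAX_SPAN = 3        # assumed maximum frame spread allowed within an itemset
MIN_SUPPORT = 2     # assumed minimum number of subtitle windows an itemset must occur in

# Each "transaction" is one subtitle-aligned video window; each item is a
# (quantised hand feature, frame offset) pair -- a stand-in for real tracking data.
transactions = [
    {("A", 0), ("B", 1), ("C", 5)},
    {("A", 0), ("B", 2)},
    {("A", 1), ("B", 2), ("D", 7)},
]

def temporally_valid(itemset):
    # Keep only itemsets whose frame offsets lie within MAX_SPAN of each other.
    times = [t for _, t in itemset]
    return max(times) - min(times) <= MAX_SPAN

def support(itemset):
    # Count how many transactions contain every item of the candidate itemset.
    return sum(1 for t in transactions if itemset <= t)

def apriori_temporal():
    # Level 1: frequent single items.
    items = {i for t in transactions for i in t}
    frequent = [{frozenset([i]) for i in items if support(frozenset([i])) >= MIN_SUPPORT}]
    k = 2
    while frequent[-1]:
        # Join step: combine frequent (k-1)-itemsets, prune by temporal span and support.
        candidates = {
            a | b
            for a, b in combinations(frequent[-1], 2)
            if len(a | b) == k and temporally_valid(a | b)
        }
        frequent.append({c for c in candidates if support(c) >= MIN_SUPPORT})
        k += 1
    return [s for level in frequent for s in level]

if __name__ == "__main__":
    for itemset in apriori_temporal():
        print(sorted(itemset))

The temporal check is applied at candidate generation rather than after support counting, which prunes combinations whose items could not belong to a single sign occurrence; the paper's actual constraint and feature set may differ.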

Citation (APA)

Cooper, H., & Bowden, R. (2009). Learning signs from subtitles: A weakly supervised approach to sign language recognition. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009 (pp. 2568–2574). IEEE Computer Society. https://doi.org/10.1109/CVPRW.2009.5206647
