Aligning ASL for statistical translation using a discriminative word model

  • Ali Farhadi
  • David Forsyth


We describe a method to align ASL video subtitles with a closed-caption transcript. Our alignments are partial, based on spotting words within the video sequence, which consists of joined (rather than isolated) signs with unknown word boundaries. We start with windows known to contain an example of a word, but not limited to it. We estimate the start and end of the word in these examples using a voting method. This provides a small number of training examples (typically three per word). Since there is no shared structure, we use a discriminative rather than a generative word model. While our word spotters are not perfect, they are sufficient to establish an alignment. We demonstrate that quite small numbers of good word spotters result in an alignment good enough to produce simple English-ASL translations, both by phrase matching and using word substitution.
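The abstract's boundary-estimation step can be pictured with a minimal sketch. This is not the authors' code: it assumes each frame of a window already carries a per-frame "word-likeness" score (the paper's discriminative model would supply something analogous), lets each example window vote for a candidate (start, end) span, and pools votes across the few examples of a word to pick consensus boundaries.

```python
# Illustrative sketch only (not the paper's implementation): voting for a
# word's start and end frames across several example windows, each of which
# is known to contain the word but also extra surrounding signing.

from collections import Counter

def boundary_votes(scores, threshold=0.5):
    """One window's vote: the longest run of frames scoring above threshold.

    `scores` is a hypothetical per-frame word-likeness score; returns a
    (start, end) pair of frame indices, inclusive, or None if no run exists.
    """
    best, cur_start, best_run = None, None, 0
    for i, s in enumerate(scores + [float("-inf")]):  # sentinel flushes last run
        if s > threshold:
            if cur_start is None:
                cur_start = i
        else:
            if cur_start is not None and i - cur_start > best_run:
                best, best_run = (cur_start, i - 1), i - cur_start
            cur_start = None
    return best

def vote_boundaries(example_windows, threshold=0.5):
    """Pool (start, end) votes from the few example windows of one word."""
    votes = Counter(boundary_votes(w, threshold) for w in example_windows)
    return votes.most_common(1)[0][0]

# Three example windows (typical count per the abstract); two agree that the
# word occupies frames 2..4, so that span wins the vote.
windows = [[0, 0, 1, 1, 1, 0],
           [0, 0, 1, 1, 1, 0],
           [0, 1, 1, 1, 1, 0]]
print(vote_boundaries(windows))  # → (2, 4)
```

The same pooled boundaries then define the cropped examples used to train the per-word discriminative spotters mentioned in the abstract.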

Author-supplied keywords

  • Action analysis and recognition
  • Applications of vision
  • Image and video retrieval
  • Object recognition

