Towards Automatic Gesture Stroke Detection

  • Binyam Gebrekidan Gebre
  • Peter Wittenburg
  • Przemyslaw Lenkiewicz


Automatic annotation of gesture strokes is important for many gesture and sign language researchers. The unpredictable diversity of human gestures and video recording conditions requires that we adopt a more adaptive, case-by-case annotation model. In this paper, we present a work-in-progress annotation model that allows a user to a) track hands/face, b) extract features, and c) distinguish strokes from non-strokes. The hands/face tracking is done with color matching algorithms and is initialized by the user. The initialization process is supported with immediate visual feedback. Sliders are also provided to support user-friendly adjustment of skin color ranges. After successful initialization, features related to the positions, orientations and speeds of the tracked hands/face are extracted using unique identifiable features (corners) from a window of frames and are used for training a learning algorithm. Our preliminary results for stroke detection under non-ideal video conditions are promising and show the potential applicability of our methodology.
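
The sketch below illustrates the kind of pipeline the abstract describes: a skin-color mask (the paper exposes its color ranges via user-adjustable sliders), corner features inside that mask, and a simple displacement descriptor between consecutive frames. It is a minimal sketch using OpenCV; the HSV thresholds, parameter values, and function names here are illustrative assumptions, not values or APIs taken from the paper.

```python
# Illustrative sketch only: skin-color matching plus corner tracking,
# assumed to approximate the tracking/feature-extraction steps above.
import cv2
import numpy as np

# User-adjustable skin-color range (the paper adjusts these via sliders);
# the bounds below are assumed example values, not the authors' settings.
SKIN_LOWER = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV lower bound
SKIN_UPPER = np.array([25, 180, 255], dtype=np.uint8)  # assumed HSV upper bound

def skin_mask(frame_bgr):
    """Binary mask of skin-colored pixels (hand/face candidates)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOWER, SKIN_UPPER)
    return cv2.medianBlur(mask, 5)  # suppress speckle noise

def corner_motion_features(prev_bgr, next_bgr):
    """Track corners inside the skin mask between two frames and return the
    mean displacement, a crude per-frame position/speed descriptor."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=5,
                                      mask=skin_mask(prev_bgr))
    if corners is None:
        return np.zeros(2)
    moved, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                corners, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    return (moved[good] - corners[good]).reshape(-1, 2).mean(axis=0)
```

Descriptors of this kind, collected over a sliding window of frames, could then be fed to any standard classifier to label frames as stroke or non-stroke; the abstract refers only to "a learning algorithm" without committing to a specific one.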

Author-supplied keywords

  • annotation
  • gesture
  • stroke
