Temporal localization of actions with actoms

  • Adrien Gaidon
  • Zaid Harchaoui
  • Cordelia Schmid

74 Mendeley readers · 61 citations


We address the problem of localizing actions, such as opening a door, in hours of challenging video data. We propose a model based on a sequence of atomic action units, termed "actoms," that are semantically meaningful and characteristic of the action. Our actom sequence model (ASM) represents an action as a sequence of histograms of actom-anchored visual features, which can be seen as a temporally structured extension of the bag-of-features representation. Training requires the annotation of actoms for action examples. At test time, actoms are localized automatically based on a nonparametric model of the distribution of actoms, which also acts as a prior on an action's temporal structure. We present experimental results on two recent benchmarks for temporal action localization: "Coffee and Cigarettes" and the "DLSBP" dataset. We also adapt our approach to a classification-by-localization setup and demonstrate its applicability on the challenging "Hollywood 2" dataset. We show that our ASM method outperforms the current state of the art in temporal action localization, as well as baselines that localize actions with a sliding-window method.
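The core representation described above — one bag-of-features histogram per actom, concatenated in temporal order — can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the function names, the Euclidean nearest-word assignment, and the fixed symmetric temporal window (`radius`) are all simplifying assumptions.

```python
import numpy as np

def bag_of_features(feats, codebook):
    """Quantize local features to their nearest visual word and
    return an L1-normalized histogram over the codebook."""
    if len(feats) == 0:
        return np.zeros(len(codebook))
    # Euclidean distance of every feature to every codebook word
    dists = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def asm_descriptor(features_by_time, actom_times, codebook, radius=5):
    """Concatenate one bag-of-features histogram per actom, each computed
    over a temporal window anchored at that actom's time stamp.
    `features_by_time` is a list of (time, feature_vector) pairs."""
    hists = []
    for t in actom_times:
        # gather the local features falling in the window around this actom
        window = [f for (time, f) in features_by_time if abs(time - t) <= radius]
        feats = np.vstack(window) if window else np.empty((0, codebook.shape[1]))
        hists.append(bag_of_features(feats, codebook))
    return np.concatenate(hists)
```

The resulting descriptor has a fixed length (number of actoms × codebook size) regardless of the action's duration, which is what lets a standard classifier consume it while still encoding temporal order.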

Author-supplied keywords

  • action recognition
  • actoms
  • temporal localization
  • video analysis


