A hierarchical representation for future action prediction

Citations: 177
Readers (Mendeley): 150


Abstract

We consider inferring the future actions of people from a still image or a short video clip. Predicting future actions before they are executed is critical for interacting effectively with other humans on a daily basis. The challenges are twofold: first, we need to capture the subtle details inherent in human movements that may imply a future action; second, in social settings predictions usually must be made as quickly as possible, when only limited prior observations are available. In this paper, we propose hierarchical movemes, a new representation that describes human movements at multiple levels of granularity, ranging from atomic movements (e.g., an open arm) to coarser movements that cover a larger temporal extent. We develop a max-margin learning framework for future action prediction that integrates a collection of moveme detectors in a hierarchical way. We validate our method on two publicly available datasets and show that it achieves very promising performance.
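As a rough illustration of the inference side described in the abstract, the sketch below scores a short clip against a hierarchy of moveme detectors and returns the highest-scoring future action. This is not the authors' code: the moveme names, the hierarchy layout, the feature representation, and the random weights are all hypothetical placeholders standing in for what the paper learns with its max-margin framework.

```python
# Minimal sketch (assumed structure, not the paper's implementation) of
# predicting a future action from hierarchical "moveme" responses.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 64  # assumed dimensionality of per-segment clip features
ACTIONS = ["handshake", "hug", "high-five"]  # hypothetical future actions

# Hierarchy of movemes: level 0 holds atomic movements (e.g. "open arm");
# higher levels hold coarser movements covering larger temporal extents.
HIERARCHY = {
    0: ["open_arm", "lean_forward", "raise_hand"],
    1: ["approach", "reach_out"],
    2: ["greeting_motion"],
}

# One linear detector (weight vector) per moveme, as in a linear
# max-margin model; here the weights are random placeholders.
detectors = {m: rng.normal(size=FEAT_DIM)
             for level in HIERARCHY.values() for m in level}

# Per-action weights that combine moveme responses across all levels.
n_movemes = sum(len(v) for v in HIERARCHY.values())
action_weights = {a: rng.normal(size=n_movemes) for a in ACTIONS}

def moveme_responses(clip_feats: np.ndarray) -> np.ndarray:
    """Max-pool each detector's response over the clip's segments,
    so a moveme fires if it appears anywhere in the observed window."""
    return np.array([max(f @ detectors[m] for f in clip_feats)
                     for level in HIERARCHY.values() for m in level])

def predict_future_action(clip_feats: np.ndarray) -> str:
    """Score each candidate future action as a linear combination of
    hierarchical moveme responses and return the arg-max action."""
    r = moveme_responses(clip_feats)
    return max(ACTIONS, key=lambda a: action_weights[a] @ r)

# Toy usage: a short clip as a sequence of per-segment feature vectors.
clip = rng.normal(size=(5, FEAT_DIM))
print(predict_future_action(clip))
```

In the paper, the detector weights and the per-action combination weights are learned jointly under a max-margin objective; the random weights above exist purely to make the inference structure concrete and runnable.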

Cite (APA)

Lan, T., Chen, T. C., & Savarese, S. (2014). A hierarchical representation for future action prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8691 LNCS, pp. 689–704). Springer Verlag. https://doi.org/10.1007/978-3-319-10578-9_45
