Learning Transferable Distance Functions for Human Action Recognition

  • Yang W
  • Wang Y
  • Mori G

Abstract

Learning-based approaches to human action recognition often rely on large training sets, and most perform poorly when only a few training samples are available. In this chapter, we consider the problem of human action recognition from a single clip per action, where each clip contains at most 25 frames. Using a patch-based motion descriptor and matching scheme, we achieve promising results on three different action datasets with a single clip as the template; these results are comparable to previously published results obtained with much larger training sets. We also present a method for learning a transferable distance function for these patches. The transferable distance function learning extracts generic knowledge of patch weighting from previous training sets, and can be applied to videos of new actions without further learning. Our experimental results show that transferable distance function learning not only improves the recognition accuracy of single-clip action recognition, but also significantly enhances the efficiency of the matching scheme.
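The matching scheme the abstract describes can be illustrated with a minimal sketch: a template clip is represented as a set of patch descriptors, each carrying a learned weight, and a test video is scored by matching every template patch to its nearest test patch and combining the match distances with those weights. The function names, the use of Euclidean patch distances, and the nearest-template classification rule are simplifying assumptions for illustration, not the chapter's exact formulation.

```python
import numpy as np

def clip_distance(template_patches, template_weights, test_patches):
    """Weighted patch-to-clip distance (illustrative simplification).

    Each template patch descriptor is matched to its nearest test patch
    descriptor; the per-patch distances are combined using the learned
    weights, so informative patches dominate the final score.
    """
    # Pairwise Euclidean distances: shape (n_template, n_test).
    diffs = template_patches[:, None, :] - test_patches[None, :, :]
    d = np.linalg.norm(diffs, axis=2)
    nearest = d.min(axis=1)  # best-matching test patch per template patch
    return float(np.dot(template_weights, nearest))

def classify(test_patches, templates):
    """Nearest-template classification with one clip per action.

    templates: dict mapping action label -> (patch descriptors, weights).
    """
    return min(templates,
               key=lambda label: clip_distance(*templates[label],
                                               test_patches))
```

Because the weights multiply per-patch distances, down-weighting uninformative patches also allows them to be pruned before matching, which is consistent with the efficiency gain the abstract reports.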

Citation
Yang, W., Wang, Y., & Mori, G. (2011). Learning Transferable Distance Functions for Human Action Recognition (pp. 349–370). https://doi.org/10.1007/978-0-85729-057-1_13
