Dimensionality reduction of Fisher vectors for human action recognition

Abstract

Automatic analysis of human behaviour in large collections of videos is rapidly gaining interest, even more so with the advent of file-sharing sites such as YouTube. From one perspective, it can be observed that the size of the feature vectors used for human action recognition in videos has grown enormously over the last five years, to the order of ∼100–500K dimensions. One possible reason is the growing number of action classes and videos, and hence the need for discriminative features (which usually end up being higher-dimensional for larger databases). In this study, the authors review and investigate feature projection as a means of reducing the dimensionality of these high-dimensional feature vectors and show its effectiveness in terms of performance. They hypothesise that dimensionality reduction techniques often unearth latent structures in the feature space and are effective in applications such as the fusion of high-dimensional features of different types, and action recognition in untrimmed videos. All experiments are conducted within a Bag-of-Words framework for consistency, and results are presented on large benchmark databases such as the HMDB51 and UCF101 datasets.
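
To illustrate the general idea of feature projection described above, the short Python sketch below reduces the dimensionality of stand-in Fisher-vector-like features with PCA before training a linear classifier. This is an illustrative assumption, not the authors' actual pipeline: the random data, the 128-dimensional target, the PCA choice and the linear SVM are all placeholders for real Fisher vectors extracted from videos.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_videos, fv_dim, n_classes = 200, 50_000, 5  # real Fisher vectors are often ~100-500K dims

# Random stand-ins for per-video Fisher vectors and action labels.
X = rng.standard_normal((n_videos, fv_dim)).astype(np.float32)
y = rng.integers(0, n_classes, size=n_videos)

# Project the high-dimensional vectors down to 128 dimensions (one simple
# form of feature projection), then train a linear classifier on top.
model = make_pipeline(PCA(n_components=128), LinearSVC())
model.fit(X, y)
print("Projected dimensionality:", model.named_steps["pca"].n_components_)

In practice the projected features could equally be fed to any downstream step, such as late fusion with other reduced feature types, which is the kind of application the abstract refers to.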

Citation (APA)

Oruganti, V. R. M., & Goecke, R. (2016). Dimensionality reduction of Fisher vectors for human action recognition. IET Computer Vision, 10(5), 392–397. https://doi.org/10.1049/IET-CVI.2015.0091
