A video self-descriptor based on sparse trajectory clustering

Abstract

A new motion descriptor is proposed in this work to describe the main movement of a video. We combine two methods for estimating the motion between frames: block matching and the image brightness gradient. A variable-size block matching algorithm is used to extract displacement vectors as motion information, and the cross product between the block matching vector and the gradient yields the final displacement vectors. These vectors are computed over a frame sequence, producing block trajectories that carry the temporal information. The block matching vectors are also used to cluster the sparse trajectories according to their shape. The proposed method combines this information into orientation tensors to generate the final descriptor. The global tensor descriptor is evaluated through classification of the KTH, UCF11 and Hollywood2 video datasets with a non-linear SVM classifier. Results indicate that our sparse trajectory method is competitive with the well-known dense trajectories approach using orientation tensors, while requiring less computational effort.
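
At a high level, the pipeline accumulates orientation tensors from trajectory displacement vectors and aggregates them per cluster. The Python sketch below illustrates only that general idea under assumptions of our own (2-D displacement vectors, a fixed number of clusters, upper-triangle flattening of each symmetric tensor); it is not the authors' implementation and omits the block matching step, the cross product with the brightness gradient, and the shape-based clustering itself.

# Minimal sketch (not the authors' code) of an orientation-tensor descriptor
# built from sparse trajectories. Names, shapes and the aggregation scheme are
# illustrative assumptions, not the method of the paper.
import numpy as np

def orientation_tensor(displacements):
    """Sum of outer products v v^T over a trajectory's displacement vectors."""
    d = displacements.shape[1]
    T = np.zeros((d, d))
    for v in displacements:
        T += np.outer(v, v)
    return T

def video_descriptor(trajectories, cluster_ids, n_clusters):
    """Accumulate one tensor per cluster and concatenate their upper triangles."""
    d = trajectories[0].shape[1]
    tensors = np.zeros((n_clusters, d, d))
    for traj, c in zip(trajectories, cluster_ids):
        tensors[c] += orientation_tensor(traj)
    # Normalize each cluster tensor and keep only the upper triangle (symmetric).
    iu = np.triu_indices(d)
    feats = []
    for T in tensors:
        norm = np.linalg.norm(T)
        if norm > 0:
            T = T / norm
        feats.append(T[iu])
    return np.concatenate(feats)

# Toy usage: two short 2-D trajectories assigned to two different clusters.
trajs = [np.array([[1.0, 0.0], [0.5, 0.5]]), np.array([[0.0, 1.0]])]
desc = video_descriptor(trajs, cluster_ids=[0, 1], n_clusters=2)
print(desc.shape)  # (6,) = 2 clusters * 3 upper-triangle entries of a 2x2 tensor

Because the tensors are accumulated from the video's own trajectories, no codebook learned from the dataset is needed, which is what makes this a self-descriptor; the per-cluster accumulation shown here is only a stand-in for the paper's shape-based grouping of sparse trajectories.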

Cite

APA

de Oliveira Figueiredo, A. M., Caniato, M., Mota, V. F., Silva, R. L. de S., & Vieira, M. B. (2016). A video self-descriptor based on sparse trajectory clustering. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9787, pp. 571–583). Springer Verlag. https://doi.org/10.1007/978-3-319-42108-7_45
