Robust feature descriptors for efficient vision-based tracking


Abstract

This paper presents a robust implementation of an object tracker that tolerates partial occlusions, rotation, and scale changes for a variety of objects. Each object is represented by a collection of interest points described in a multi-resolution framework, giving a representation of those points at different scales. Inspired by [1], a stack of descriptors is built only the first time the interest points are detected and extracted from the region of interest. This yields an efficient representation and faster tracking, since the stack can be computed off-line. An Unscented Kalman Filter (UKF) with a constant-velocity model estimates the position and scale of the object; the uncertainty estimates produced by the UKF then constrain the search for the object to a specific region in both image space and scale. This approach improves real-time tracking performance and the ability to recover from full occlusions. © Springer-Verlag Berlin Heidelberg 2007.
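The filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper uses a UKF, but for a purely linear constant-velocity state model the UKF reduces to the standard Kalman filter used here, and all state layout and noise values below are assumptions chosen for the example. The `search_region` helper shows how the filter's covariance can gate the search in image position and scale.

```python
import numpy as np

# Sketch of a constant-velocity tracker over position and scale.
# State: [x, y, scale, vx, vy, vscale]. Noise values are illustrative.
dt = 1.0  # one frame per step

F = np.eye(6)
F[0, 3] = F[1, 4] = F[2, 5] = dt              # constant-velocity transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe x, y, scale

Q = 0.01 * np.eye(6)   # process noise (assumed)
R = 4.0 * np.eye(3)    # measurement noise (assumed)

x = np.array([100.0, 100.0, 1.0, 0.0, 0.0, 0.0])  # initial state
P = 10.0 * np.eye(6)                              # initial covariance


def predict(x, P):
    """Propagate state and covariance one frame ahead."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P


def update(x, P, z):
    """Fuse a measured (x, y, scale) detection into the estimate."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P


def search_region(x, P, n_sigma=3.0):
    """Gate feature matching to +/- n_sigma std devs in x, y and scale."""
    std = np.sqrt(np.diag(P)[:3])
    return x[:3] - n_sigma * std, x[:3] + n_sigma * std


# Simulated detections: object drifts right and slowly grows in scale.
for t in range(1, 6):
    z = np.array([100.0 + 2.0 * t, 100.0, 1.0 + 0.02 * t])
    x, P = predict(x, P)
    x, P = update(x, P, z)

lo, hi = search_region(x, P)  # bounds for the next frame's feature search
```

Restricting descriptor matching to `search_region` is what makes the tracking fast: only interest points inside the gated image window, and only descriptor levels near the predicted scale, need to be compared against the precomputed stack.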

Citation (APA)

Carrera, G., Savage, J., & Mayol-Cuevas, W. (2007). Robust feature descriptors for efficient vision-based tracking. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4756 LNCS, pp. 251–260). https://doi.org/10.1007/978-3-540-76725-1_27
