Multi-model component-based tracking using robust information fusion

Abstract

One of the most difficult aspects of visual object tracking is handling occlusions and target appearance changes due to variations in illumination and viewing direction. To address these challenges we introduce a novel tracking technique that relies on component-based target representations and on robust fusion to integrate model information across frames. More specifically, we maintain a set of component-based models of the target, acquired at different time instances, and robustly combine the motion estimates suggested by each component to determine the next position of the target. In this paper we allow the target to undergo similarity transformations, although the framework is general enough to be applied to more complex ones. We pay particular attention to uncertainty handling and propagation in component motion estimation, robust fusion across time, and estimation of the similarity transform. The theory is tested on very difficult real tracking scenarios with promising results.
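
The abstract does not give implementation details, but the core idea of fusing per-component motion estimates according to their uncertainties can be sketched. The Python/NumPy example below is an illustrative assumption, not the paper's exact algorithm: it combines hypothetical per-component displacement estimates and their covariances with inverse-covariance (information) weighting, adding a simple iterative robust reweighting that down-weights components whose estimates disagree with the fused motion (e.g., occluded parts). The function name and the Gaussian reweighting scheme are assumptions made for the sketch.

```python
import numpy as np

def fuse_component_motions(estimates, covariances, n_iters=5):
    """Robustly fuse per-component 2-D motion estimates.

    estimates:   list of (2,) displacement vectors, one per component
    covariances: list of (2, 2) covariance matrices of those estimates

    Plain inverse-covariance fusion plus an iterative robust
    reweighting step; an illustration only, not the paper's method.
    """
    d = np.asarray(estimates, dtype=float)                      # (N, 2)
    infos = np.array([np.linalg.inv(C) for C in covariances])   # (N, 2, 2)
    w = np.ones(len(d))                                         # robust weights

    for _ in range(n_iters):
        # Information-fusion step: weighted inverse-covariance combination.
        I_sum = np.einsum('n,nij->ij', w, infos)
        b = np.einsum('n,nij,nj->i', w, infos, d)
        fused = np.linalg.solve(I_sum, b)
        fused_cov = np.linalg.inv(I_sum)

        # Robust reweighting: Mahalanobis distance of each component
        # estimate to the fused motion, passed through a soft Gaussian gate.
        r = d - fused
        m2 = np.einsum('ni,nij,nj->n', r, infos, r)
        w = np.exp(-0.5 * m2)

    return fused, fused_cov


if __name__ == "__main__":
    # Three components agree on a ~(2, 1) displacement; the fourth is an
    # outlier (e.g. an occluded component) and should be down-weighted.
    est = [np.array([2.1, 0.9]), np.array([1.9, 1.1]),
           np.array([2.0, 1.0]), np.array([7.0, -3.0])]
    cov = [0.2 * np.eye(2)] * 4
    motion, motion_cov = fuse_component_motions(est, cov)
    print("fused motion:", motion)
```

The output motion and its covariance illustrate how component estimates and their uncertainties combine into a single fused displacement; in the paper this fused information further drives the estimation of a similarity transform for the whole target.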

Citation (APA)

Georgescu, B., Comaniciu, D., Han, T. X., & Zhou, X. S. (2004). Multi-model component-based tracking using robust information fusion. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3247, 61–70. https://doi.org/10.1007/978-3-540-30212-4_6
