Multimodal shape tracking with point distribution models

Abstract

This paper addresses the problem of multimodal shape-based object tracking with learned spatio-temporal representations. Multimodality is considered both in terms of shape representation and in terms of state propagation. Shape representation involves a set of distinct linear subspace models, or Point Distribution Models (PDMs), which correspond to clusters of similar shapes. This representation is learned fully automatically from training data, without requiring prior feature correspondence. Multimodality at the state propagation level is achieved by particle filtering. The tracker uses a mixed state: continuous parameters describe rigid transformations and shape variations within a PDM, whereas a discrete parameter covers the PDM membership; discontinuous shape changes are modeled as transitions between discrete states of a Markov model. The observation density is derived from a well-behaved matching criterion involving multi-feature distance transforms. We illustrate our approach on pedestrian tracking from a moving vehicle. © Springer-Verlag Berlin Heidelberg 2002.
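To make the shape-representation idea concrete, the following is a minimal sketch of fitting a single Point Distribution Model: a mean shape plus principal modes of variation extracted from aligned training shapes, with new shapes synthesized as x = mean + P b. This is an illustrative toy (synthetic ellipse data, function names, and the two-mode choice are all assumptions, not the authors' implementation, which additionally clusters shapes into multiple PDMs and learns correspondences automatically):

```python
import numpy as np

def fit_pdm(shapes, n_modes):
    """Fit a PDM: mean shape plus principal modes of variation.

    shapes: (N, 2K) array of N aligned training shapes, each with K
            landmarks flattened as (x1, y1, ..., xK, yK).
    Returns (mean, modes) with modes of shape (2K, n_modes).
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal modes via SVD of the centered data matrix
    # (equivalent to an eigen-decomposition of the shape covariance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes].T

def synthesize(mean, modes, b):
    """Generate a shape from subspace coefficients b: x = mean + P b."""
    return mean + modes @ b

# Toy training set: noisy ellipses with varying aspect ratio,
# sampled at 16 landmarks each (so 32-dimensional shape vectors).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
shapes = np.stack([
    np.column_stack((a * np.cos(t), np.sin(t))).ravel()
    + 0.01 * rng.standard_normal(32)
    for a in np.linspace(1.0, 2.0, 20)
])

mean, modes = fit_pdm(shapes, n_modes=2)
x = synthesize(mean, modes, np.array([0.5, 0.0]))
print(mean.shape, modes.shape, x.shape)  # (32,) (32, 2) (32,)
```

In the paper's tracker, the coefficients b form part of the continuous state of each particle, while a separate discrete state selects which cluster-specific PDM is active; the sketch above covers only the linear subspace part.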

Citation (APA)

Giebel, J., & Gavrila, D. M. (2002). Multimodal shape tracking with point distribution models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2449 LNCS, pp. 1–8). Springer Verlag. https://doi.org/10.1007/3-540-45783-6_1
