This paper addresses multimodal shape-based object tracking with learned spatio-temporal representations. Multimodality is considered both at the level of shape representation and at the level of state propagation. Shapes are represented by a set of distinct linear subspace models, or Point Distribution Models (PDMs), each corresponding to a cluster of similar shapes. This representation is learned fully automatically from training data, without requiring prior feature correspondences. Multimodality at the state-propagation level is achieved by particle filtering. The tracker uses a mixed state: continuous parameters describe rigid transformations and shape variations within a PDM, whereas a discrete parameter denotes PDM membership; discontinuous shape changes are modeled as transitions between the discrete states of a Markov model. The observation density is derived from a well-behaved matching criterion based on multi-feature distance transforms. We illustrate our approach on pedestrian tracking from a moving vehicle. © Springer-Verlag Berlin Heidelberg 2002.
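The mixed-state filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of PDM clusters, the parameter dimensionality, the transition matrix values, and the Gaussian stand-in for the distance-transform matching score are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 200
N_PDMS = 3   # number of shape clusters (assumed)
DIM = 4      # rigid-transform + shape parameters per particle (assumed)

# Markov transition matrix between discrete PDM states (illustrative values;
# each row sums to 1 and favors staying in the current cluster)
TRANS = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.90, 0.05],
                  [0.05, 0.05, 0.90]])

def propagate(discrete, continuous):
    """Mixed-state dynamics: Markov jump on the PDM index,
    Gaussian drift on the continuous parameters."""
    new_discrete = np.array([rng.choice(N_PDMS, p=TRANS[d]) for d in discrete])
    new_continuous = continuous + rng.normal(0.0, 0.1, continuous.shape)
    return new_discrete, new_continuous

def likelihood(discrete, continuous, observation):
    """Placeholder observation density: the paper derives it from
    multi-feature distance transforms; here we stand that in with a
    Gaussian score around a per-PDM target (purely illustrative)."""
    targets = observation[discrete]                 # (N_PARTICLES, DIM)
    err = np.sum((continuous - targets) ** 2, axis=1)
    return np.exp(-0.5 * err)

def resample(discrete, continuous, weights):
    """Multinomial resampling proportional to the particle weights."""
    idx = rng.choice(len(weights), size=len(weights), p=weights / weights.sum())
    return discrete[idx], continuous[idx]

# one filtering step on synthetic data
discrete = rng.integers(0, N_PDMS, N_PARTICLES)
continuous = rng.normal(0.0, 1.0, (N_PARTICLES, DIM))
observation = rng.normal(0.0, 1.0, (N_PDMS, DIM))   # fake per-PDM targets

discrete, continuous = propagate(discrete, continuous)
w = likelihood(discrete, continuous, observation)
discrete, continuous = resample(discrete, continuous, w)

# crude estimate of the discrete state: most populated PDM after resampling
map_pdm = np.bincount(discrete, minlength=N_PDMS).argmax()
```

The key point the sketch captures is that each particle carries both a discrete cluster label and continuous parameters, so the filter can represent several shape hypotheses at once and switch clusters via the Markov transition matrix.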
Giebel, J., & Gavrila, D. M. (2002). Multimodal shape tracking with point distribution models. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2449 LNCS, pp. 1–8). Springer Verlag. https://doi.org/10.1007/3-540-45783-6_1