We present an approach for motion segmentation that uses independently detected keypoints instead of the commonly used tracklets or trajectories. This allows us to establish correspondences over non-consecutive frames and thus to handle multiple object occlusions consistently. On the frame-to-frame level, we extend the classical split-and-merge algorithm for fast and precise motion segmentation. Globally, we cluster multiple such segmentations computed at different time scales, which also yields an accurate estimate of the number of motions. On the standard benchmarks, our approach performs best among all algorithms that can handle unconstrained missing data. We further show that it still works on benchmark data with more than 98% of the input data missing. Finally, we evaluate the performance on a mobile-phone-recorded sequence in which multiple objects are occluded at the same time.
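To make the frame-to-frame step more concrete, below is a minimal illustrative sketch of split-and-merge segmentation of keypoint correspondences between two (not necessarily consecutive) frames. It is not the authors' implementation: the affine motion model, the residual threshold `tol`, the spatial-bisection split heuristic, and the toy data are assumptions introduced only to demonstrate the general split-and-merge idea.

```python
# Sketch: split-and-merge grouping of keypoint correspondences into motions.
# Assumptions (not from the paper): affine motion model, fixed residual
# threshold, split by bisecting along the spatial axis of largest extent.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map src -> dst; returns (params, per-point residuals)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2
    res = np.linalg.norm(X @ A - dst, axis=1)      # reprojection error per point
    return A, res

def split_and_merge(src, dst, tol=2.0, min_size=4):
    """Group correspondences (src[i] -> dst[i]) into affine-consistent motions."""
    groups = [np.arange(len(src))]
    # Split phase: bisect groups whose single affine fit is too poor.
    changed = True
    while changed:
        changed, new_groups = False, []
        for idx in groups:
            _, res = fit_affine(src[idx], dst[idx])
            if res.max() > tol and len(idx) >= 2 * min_size:
                axis = np.argmax(np.ptp(src[idx], axis=0))     # widest spatial axis
                order = idx[np.argsort(src[idx][:, axis])]
                half = len(order) // 2
                new_groups += [order[:half], order[half:]]
                changed = True
            else:
                new_groups.append(idx)
        groups = new_groups
    # Merge phase: join any two groups explained by a common affine motion.
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                idx = np.concatenate([groups[i], groups[j]])
                _, res = fit_affine(src[idx], dst[idx])
                if res.max() <= tol:
                    groups[i] = idx
                    del groups[j]
                    merged = True
                    break
            if merged:
                break
    return groups

# Toy example: two point sets undergoing different translations, plus noise.
rng = np.random.default_rng(0)
pts_a = rng.uniform(0, 50, (30, 2))
pts_b = rng.uniform(100, 150, (30, 2))
src = np.vstack([pts_a, pts_b])
dst = np.vstack([pts_a + [5, 0], pts_b + [0, -8]]) + rng.normal(0, 0.3, (60, 2))
print([len(g) for g in split_and_merge(src, dst)])   # expected: two groups of 30
```

In the paper's setting, such frame-to-frame segmentations would then be clustered across multiple time scales; the sketch above only covers the per-frame-pair grouping step.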
Dragon, R., Rosenhahn, B., & Ostermann, J. (2012). Multi-scale clustering of frame-to-frame correspondences for motion segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7573 LNCS, pp. 445–458). https://doi.org/10.1007/978-3-642-33709-3_32