3D motion consistency analysis for segmentation in 2D video projection


Abstract

Motion segmentation for 2D videos is usually based on tracked 2D point motions obtained over a sequence of frames. However, the motion consistency of the real 3D world is easily lost in the process, due to the projection from 3D space onto the 2D image plane. Several approaches have been proposed in the literature to recover 3D motion consistency from 2D point motions. To improve on these, we propose a new criterion and an associated technique for determining whether a group of points exhibits 2D motions consistent with a joint 3D motion. The criterion can also be used to estimate the 3D motion information content. We demonstrate that it improves segmentation results in two ways: by finding misclassified points within a group, and by assigning unclassified points to the correct group. Experiments with synthetic data at different noise levels, and with real data from a benchmark, give insight into the performance of the algorithm under various conditions.
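
To illustrate what a 3D motion consistency test on 2D tracks can look like, the sketch below scores a group of tracked points using the classical rank constraint on the trajectory matrix under an affine camera model (as in Tomasi-Kanade factorization). This is a standard baseline, not the criterion proposed in the paper (which the abstract does not detail), and the names consistency_score and subspace_dim are hypothetical.

    import numpy as np

    def consistency_score(tracks, subspace_dim=3):
        """Score how well a group of tracked 2D points moves as one rigid 3D object.

        Uses the rank constraint on the trajectory matrix under an affine camera
        model: after removing the per-frame centroid, the trajectories of a single
        rigid 3D motion span a subspace of dimension at most 3.

        tracks : ndarray of shape (F, P, 2), P points tracked over F frames.
        Returns a value in [0, 1]; values near 1 indicate consistency with a
        joint 3D motion.
        """
        F, P, _ = tracks.shape
        # Build the 2F x P trajectory matrix (x rows stacked over y rows).
        W = np.concatenate([tracks[..., 0], tracks[..., 1]], axis=0)
        # Remove the per-frame centroid (translation component).
        W = W - W.mean(axis=1, keepdims=True)
        s = np.linalg.svd(W, compute_uv=False)
        # Fraction of trajectory energy captured by the leading singular values.
        return float((s[:subspace_dim] ** 2).sum() / (s ** 2).sum())

    # Synthetic check: one rigid object rotated and translated, with tracking noise.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 20))                      # 3D points of one object
    frames = []
    for t in range(15):
        a = 0.05 * t
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        P2 = (R @ X)[:2].T + 0.2 * t                  # affine projection + translation
        frames.append(P2 + 0.01 * rng.normal(size=P2.shape))
    tracks = np.stack(frames)                          # shape (15, 20, 2)
    print(consistency_score(tracks))                   # close to 1

A score close to 1 means the centered trajectories are well explained by a low-dimensional subspace, as a single rigid 3D motion would produce; groups mixing points from differently moving objects tend to spread their energy over more singular values and score lower, which is one way misclassified or unclassified points could be detected.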

Cite

APA

Zhao, W., Roos, N., & Peeters, R. (2017). 3D motion consistency analysis for segmentation in 2D video projection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10425 LNCS, pp. 440–452). Springer Verlag. https://doi.org/10.1007/978-3-319-64698-5_37
