Motion segmentation based on model selection in permutation space for RGB sensors

Abstract

Motion segmentation aims to segment the feature-point trajectories belonging to independently moving objects. Under the affine camera model, the motion segmentation problem can be viewed as a subspace clustering problem: clustering data points drawn from a union of low-dimensional subspaces. In this paper, we propose a solution for motion segmentation based on a multi-model fitting technique. We introduce a data grouping method and a model selection strategy that yield more distinguishable data point permutation preferences, which significantly improves the clustering. We perform extensive testing on the Hopkins 155 dataset and two real-world datasets. The experimental results show that the proposed method can handle incomplete trajectories and the perspective effect, comparing favorably with the current state of the art.
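The subspace structure the abstract invokes can be illustrated concretely. Under an affine camera, the 2F×P matrix stacking a rigid object's point trajectories over F frames factors as M·[X; 1], where M is 2F×4, so every trajectory lies in a linear subspace of dimension at most 4. The sketch below (plain NumPy; not the paper's permutation-space algorithm, and the seed-based residual assignment is a hypothetical stand-in for a real subspace clustering method) verifies the rank bound on synthetic data and segments two motions by projection residual:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 30  # frames, points per rigid object

def affine_trajectories(n_points, rng):
    # One rigid motion seen by an affine camera: in frame f,
    # image coords are A_f @ X + t_f with A_f a 2x3 affine projection.
    X = rng.standard_normal((3, n_points))   # 3-D structure
    rows = []
    for _ in range(F):
        A = rng.standard_normal((2, 3))      # per-frame affine projection
        t = rng.standard_normal((2, 1))      # per-frame translation
        rows.append(A @ X + t)
    return np.vstack(rows)                   # 2F x n_points trajectory matrix

W1 = affine_trajectories(P, rng)             # motion 1
W2 = affine_trajectories(P, rng)             # motion 2

# Each motion's trajectory matrix factors as M @ [X; 1] with M of size
# 2F x 4, so its rank is at most 4 despite living in R^(2F).
print(np.linalg.matrix_rank(W1))             # 4

def subspace_basis(W_seed, d=4):
    # Orthonormal basis for the (<= d)-dimensional trajectory subspace,
    # estimated from a few seed trajectories via SVD.
    U, _, _ = np.linalg.svd(W_seed, full_matrices=False)
    return U[:, :d]

# Toy segmentation: assign each mixed trajectory to the subspace
# with the smaller projection residual.
B1 = subspace_basis(W1[:, :5])
B2 = subspace_basis(W2[:, :5])
W = np.hstack([W1, W2])
labels = []
for col in W.T:
    r1 = np.linalg.norm(col - B1 @ (B1.T @ col))
    r2 = np.linalg.norm(col - B2 @ (B2.T @ col))
    labels.append(0 if r1 < r2 else 1)
```

On this clean synthetic data the residual test recovers the two motions exactly; the paper's contribution addresses the harder real-data regime (noise, incomplete trajectories, perspective effects) where such a naive assignment breaks down.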

Citation (APA)

Zhao, X., Qin, Q., & Luo, B. (2019). Motion segmentation based on model selection in permutation space for RGB sensors. Sensors (Switzerland), 19(13). https://doi.org/10.3390/s19132936
