Self-supervised Sparse to Dense Motion Segmentation


Abstract

Observable motion in videos can give rise to the definition of objects moving with respect to the scene. The task of segmenting such moving objects is referred to as motion segmentation and is usually tackled either by aggregating motion information in long, sparse point trajectories, or by directly producing per-frame dense segmentations that rely on large amounts of training data. In this paper, we propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames. While previous approaches to motion segmentation build upon pre-training on large surrogate datasets and use dense motion information as an essential cue for the pixel-wise segmentation, our model requires no pre-training and operates at test time on single frames. It can be trained in a sequence-specific way to produce high-quality dense segmentations from sparse and noisy input. We evaluate our method on the well-known motion segmentation datasets FBMS-59 and DAVIS16.
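The abstract's core idea is to supervise a dense per-frame segmentation model using only sparse, noisy labels carried by point trajectories. A common way to realize such sparse-to-dense training is a masked loss that is evaluated only at the labeled pixels. The sketch below illustrates this generic ingredient with NumPy; it is an assumption about the training signal, not the authors' actual implementation, and the function name and label convention (`-1` for unlabeled pixels) are hypothetical.

```python
import numpy as np

def masked_sparse_loss(pred_probs, sparse_labels):
    """Cross-entropy evaluated only at pixels that carry a sparse label.

    pred_probs:    (H, W, C) per-pixel class probabilities from the model.
    sparse_labels: (H, W) int array; -1 marks unlabeled pixels, otherwise
                   the object/background class index from a trajectory label.
    """
    mask = sparse_labels >= 0
    if not mask.any():
        return 0.0
    # Probability assigned to the labeled class at each labeled pixel.
    labeled = pred_probs[mask, sparse_labels[mask]]
    # Average negative log-likelihood over labeled pixels only; unlabeled
    # pixels contribute nothing, so the dense prediction there is free.
    return float(-np.mean(np.log(np.clip(labeled, 1e-8, 1.0))))
```

Because the loss ignores unlabeled pixels, the dense output at those locations is shaped only indirectly, through the smoothness of the learned model, which is what makes training from sparse trajectory labels feasible.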

Citation (APA)

Kardoost, A., Ho, K., Ochs, P., & Keuper, M. (2021). Self-supervised Sparse to Dense Motion Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12623 LNCS, pp. 421–437). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-69532-3_26
