Tackling Background Distraction in Video Object Segmentation


Abstract

Semi-supervised video object segmentation (VOS) aims to densely track designated objects in videos. One of the main challenges in this task is the existence of background distractors that appear similar to the target objects. We propose three novel strategies to suppress such distractors: 1) a spatio-temporally diversified template construction scheme to obtain generalized properties of the target objects; 2) a learnable distance-scoring function that excludes spatially distant distractors by exploiting the temporal consistency between two consecutive frames; 3) swap-and-attach augmentation, which forces each object to have unique features by providing training samples containing entangled objects. On all public benchmark datasets, our model achieves performance comparable to contemporary state-of-the-art approaches while running in real time. Qualitative results also demonstrate the superiority of our approach over existing methods. We believe our approach will be widely used in future VOS research. Code and models are available at https://github.com/suhwan-cho/TBD.
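The swap-and-attach augmentation mentioned above can be pictured as cutting each target object out of one frame and pasting it onto another, so that the training samples contain overlapping ("entangled") objects. The following is a minimal illustrative sketch of that idea, not the authors' actual implementation; the function name and the mask-compositing details are assumptions for illustration.

```python
import numpy as np

def swap_and_attach(img_a, mask_a, img_b, mask_b):
    """Illustrative sketch of swap-and-attach augmentation.

    img_a, img_b: H x W x C images; mask_a, mask_b: H x W binary
    object masks. Each object is attached onto the *other* image,
    yielding two samples whose objects overlap with foreign content.
    """
    fg_a = mask_a[..., None].astype(bool)
    fg_b = mask_b[..., None].astype(bool)
    # Attach object B onto image A wherever mask_b is foreground.
    aug_a = np.where(fg_b, img_b, img_a)
    # Symmetrically, attach object A onto image B.
    aug_b = np.where(fg_a, img_a, img_b)
    return aug_a, aug_b
```

In training, such entangled samples would force the segmentation network to rely on object-specific features rather than spatial separation alone.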

Citation (APA)

Cho, S., Lee, H., Lee, M., Park, C., Jang, S., Kim, M., & Lee, S. (2022). Tackling Background Distraction in Video Object Segmentation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13682 LNCS, pp. 446–462). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20047-2_26
