X-Net: A Binocular Summation Network for Foreground Segmentation

Abstract

In foreground segmentation, it is challenging to construct an effective background model that learns the spatial-temporal representation of the background. Recently, deep learning-based background models (DBMs), which can extract high-level features, have shown remarkable performance. However, existing state-of-the-art DBMs treat video segmentation as single-image segmentation and ignore temporal cues in video sequences. To exploit temporal data more fully, this paper proposes, for the first time, a multi-input multi-output (MIMO) DBM framework, partially inspired by the binocular summation effect in human vision. The framework is an X-shaped network that allows the DBM to track temporal changes in a video sequence. Moreover, each output branch of the model receives visual signals from two similar input frames simultaneously, analogous to the binocular summation mechanism. In addition, the model can be trained end-to-end from only a few training examples and requires no post-processing. We evaluate our method on the largest change detection dataset (CDnet 2014) and achieve state-of-the-art performance, with an average overall F-Measure of 0.9920.
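To illustrate the X-shaped, multi-input multi-output idea described in the abstract, the sketch below shows a minimal PyTorch network in which two input frames are encoded by separate branches, fused through an elementwise summation stage (mimicking the binocular summation motif), and decoded by two output branches, one mask per frame. The layer choices, channel widths, and the name `XNetSketch` are illustrative assumptions, not the exact X-Net architecture from the paper.

```python
# Hypothetical sketch of an X-shaped MIMO network: two encoder arms,
# a shared "summation" fusion stage, and two decoder arms.
# NOT the authors' exact architecture; sizes and fusion are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class XNetSketch(nn.Module):
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        # Two encoder arms, one per input frame.
        self.enc_a = nn.Sequential(conv_block(in_ch, feat), nn.MaxPool2d(2))
        self.enc_b = nn.Sequential(conv_block(in_ch, feat), nn.MaxPool2d(2))
        # Fusion stage: both arms are summed so each output branch
        # sees information from both input frames.
        self.fuse = conv_block(feat, feat)
        # Two decoder arms, one foreground mask per input frame.
        self.dec_a = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(feat, feat),
            nn.Conv2d(feat, 1, kernel_size=1),
        )
        self.dec_b = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(feat, feat),
            nn.Conv2d(feat, 1, kernel_size=1),
        )

    def forward(self, frame_a, frame_b):
        fa = self.enc_a(frame_a)
        fb = self.enc_b(frame_b)
        fused = self.fuse(fa + fb)  # binocular-summation-style fusion
        mask_a = torch.sigmoid(self.dec_a(fused))
        mask_b = torch.sigmoid(self.dec_b(fused))
        return mask_a, mask_b

if __name__ == "__main__":
    # Usage: two temporally adjacent frames in, two foreground masks out.
    net = XNetSketch()
    a = torch.rand(1, 3, 240, 320)
    b = torch.rand(1, 3, 240, 320)
    ma, mb = net(a, b)
    print(ma.shape, mb.shape)  # torch.Size([1, 1, 240, 320]) for each mask
```

The key design point, under these assumptions, is that the fusion happens once in the middle of the "X": both decoders read the same fused representation, so each output mask benefits from the visual signal of both input frames.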

Cite

APA

Zhang, J., Li, Y., Chen, F., Pan, Z., Zhou, X., Li, Y., & Jiao, S. (2019). X-Net: A Binocular Summation Network for Foreground Segmentation. IEEE Access, 7, 71412–71422. https://doi.org/10.1109/ACCESS.2019.2919802
