Sparse learning for robust background subtraction of video sequences


Abstract

Sparse representation has been applied to background detection by finding the best candidate with minimal reconstruction error over a set of target templates. However, most sparse-representation-based methods consider only the holistic representation and do not make full use of the sparse coefficients to discriminate between foreground and background. Learning overcomplete dictionaries that admit a sparse representation of the data as a linear combination of a few dictionary atoms has led to state-of-the-art results in image and video restoration and classification. To address these challenges, this paper proposes a new method for robust background detection via sparse representation. The method exploits both the strength of patch-based adaptive dictionary learning for analyzing video frame structure and the robustness to outliers conferred by an l1-norm data-fidelity term. Using linear sparse combinations of dictionary atoms, the proposed method learns sparse representations of the video frame regions corresponding to candidate particles. Experiments show that the proposed method tolerates background clutter and frame deterioration, and improves on the detection performance of existing methods.
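The core idea in the abstract — sparse-code a frame patch over a background-trained dictionary and flag it as foreground when the reconstruction error is large — can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the dictionary here is random, the lam threshold and the ista solver (iterative soft-thresholding for the l1-regularized least-squares problem) are stand-ins for the paper's learned dictionary and l1-norm data-fidelity formulation.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=300):
    """Sparse-code y over dictionary D by minimizing
    0.5 * ||y - D x||_2^2 + lam * ||x||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic term
        z = x - grad / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Hypothetical usage: a unit-norm random dictionary stands in for one
# learned from background patches; a patch that D explains well should
# yield a small residual and hence be labeled background.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
y = D[:, :3] @ np.array([1.0, -0.5, 0.8])  # patch in the span of 3 atoms
x = ista(D, y)
residual = np.linalg.norm(y - D @ x)
is_foreground = residual > 0.5             # threshold 0.5 is illustrative
```

In a real pipeline the dictionary would be learned per patch location from background frames, and the residual threshold tuned on held-out data.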

Citation (APA)

Luo, Y., & Zhang, H. (2015). Sparse learning for robust background subtraction of video sequences. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9225, pp. 400–411). Springer Verlag. https://doi.org/10.1007/978-3-319-22180-9_39
