Robust dynamic background model with adaptive region based on T2FS and GMM

Abstract

For many tracking and surveillance applications, the Gaussian mixture model (GMM) provides an effective means of segmenting the foreground from the background. However, because of insufficient and noisy data in complex dynamic scenes, the estimated parameters of the GMM, which rest on the assumption that each pixel process follows a multi-modal Gaussian distribution, may not accurately reflect the underlying distribution of the observations. Moreover, the existing block-based GMM (BGMM) method can segment only rough foreground objects and requires time-consuming calculations. To address these difficulties, this paper proposes using type-2 fuzzy sets (T2FSs) to handle the GMM’s uncertain parameters (T2GMM). It also introduces a novel representation of contextual spatial information, combining color, edge, and texture features for each block, that is faster and almost lossless (T2BGMM). Experimental results demonstrate the efficiency of the proposed methods.
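For context, the baseline the paper extends can be sketched as the classic per-pixel GMM background model (Stauffer–Grimson style): each pixel is modeled by a small mixture of Gaussians, an incoming value is matched against the modes, and the highest-weight, lowest-variance modes are treated as background. The type-2 fuzzy extension (T2GMM) and the block-based features (T2BGMM) are not reproduced here; every parameter value and name below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel GMM background model (classic baseline, not T2GMM)."""

    def __init__(self, k=3, alpha=0.05, var_init=30.0,
                 match_sigmas=2.5, bg_threshold=0.7):
        self.alpha = alpha                # learning rate (assumed value)
        self.var_init = var_init          # variance assigned to new modes
        self.match_sigmas = match_sigmas  # match within this many std devs
        self.bg_threshold = bg_threshold  # cumulative weight covering background
        self.means = np.zeros(k)
        self.vars = np.full(k, var_init)
        self.weights = np.full(k, 1.0 / k)

    def update(self, x):
        """Absorb intensity x into the model; return True if x is background."""
        dist = np.abs(x - self.means)
        matched = dist < self.match_sigmas * np.sqrt(self.vars)
        if matched.any():
            m = int(np.argmax(matched))   # first matching mode
            self.means[m] += self.alpha * (x - self.means[m])
            self.vars[m] += self.alpha * ((x - self.means[m]) ** 2 - self.vars[m])
            self.weights += self.alpha * (matched.astype(float) - self.weights)
        else:
            m = -1                        # no mode explains x: foreground
            worst = int(np.argmin(self.weights))
            self.means[worst] = x         # recycle the least probable mode
            self.vars[worst] = self.var_init
            self.weights[worst] = self.alpha
        self.vars = np.maximum(self.vars, 1e-4)  # variance floor
        self.weights /= self.weights.sum()
        # Background modes: best weight/spread ratio, accumulated until
        # their total weight first exceeds bg_threshold.
        order = np.argsort(-(self.weights / np.sqrt(self.vars)))
        cum = np.cumsum(self.weights[order])
        num_bg = int(np.searchsorted(cum, self.bg_threshold)) + 1
        return m in order[:num_bg].tolist()
```

A stable pixel value repeatedly observed is absorbed into a dominant low-variance mode and classified as background, while an outlying value fails to match and is flagged as foreground. In the paper, the uncertain mean and variance of such Gaussians are instead handled with type-2 fuzzy membership functions.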

APA

Guo, Y., Ji, Y., Zhang, J., Gong, S., & Liu, C. (2015). Robust dynamic background model with adaptive region based on T2FS and GMM. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9403, pp. 764–770). Springer Verlag. https://doi.org/10.1007/978-3-319-25159-2_70
