A framework for background detection in video

Abstract

This paper presents a framework for background detection in video. First, key frames are extracted to capture background changes in the video and to reduce the volume of data. The content of the key frames is then analyzed to determine whether they contain an interesting background. A time-constrained clustering algorithm is used for key frame extraction, and background detection in each key frame is performed with color and texture cues. Because illumination varies widely in natural scenes, color is modeled with three sub-models: strong light, normal light, and weak light. The connectivity of background pixels is exploited to reduce the computational cost of the texture analysis. Experimental results show that background can be detected simply and efficiently under this framework.
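The abstract's key frame extraction step can be illustrated with a small sketch. The paper itself does not give implementation details, so the following is only a minimal, hypothetical interpretation of time-constrained clustering: frames may only join the cluster of their immediate predecessors, so clusters remain contiguous in time, and one representative frame per cluster is kept as a key frame. The feature representation and the `threshold` parameter are assumptions, not taken from the paper.

```python
import numpy as np

def extract_key_frames(features, threshold=0.5):
    """Time-constrained clustering sketch (hypothetical parameters).

    `features` is an (n_frames, dim) array of per-frame descriptors
    (e.g. color histograms). A frame joins the current cluster only if
    it is close to that cluster's centroid; otherwise a new cluster
    starts, so clusters are contiguous in time. Returns one key-frame
    index per cluster: the member closest to the cluster centroid.
    """
    clusters = [[0]]  # each cluster is a list of consecutive frame indices
    for i in range(1, len(features)):
        centroid = np.mean(features[clusters[-1]], axis=0)
        if np.linalg.norm(features[i] - centroid) <= threshold:
            clusters[-1].append(i)   # continue the current temporal cluster
        else:
            clusters.append([i])     # scene change: start a new cluster
    key_frames = []
    for members in clusters:
        centroid = np.mean(features[members], axis=0)
        dists = [np.linalg.norm(features[j] - centroid) for j in members]
        key_frames.append(members[int(np.argmin(dists))])
    return key_frames
```

For example, five frames whose one-dimensional features jump from around 0.0 to around 1.0 split into two temporal clusters, yielding one key frame from each.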

APA

Qing, L., Wang, W., Huang, T., & Gao, W. (2002). A framework for background detection in video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2532, pp. 799–805). Springer-Verlag. https://doi.org/10.1007/3-540-36228-2_99
