Online moving camera background subtraction

Abstract

Recently, several methods for background subtraction from a moving camera have been proposed. They use bottom-up cues to segment video frames into foreground and background regions. Because they lack explicit models, they can easily fail to detect a foreground object when such cues are ambiguous in certain parts of the video. This becomes even more challenging when videos need to be processed online. We present a method that learns pixel-based models for the foreground and background regions and, in addition, segments each frame in an online framework. The method uses long-term trajectories together with a Bayesian filtering framework to estimate motion and appearance models. We compare our method to previous approaches and show results on challenging video sequences. © 2012 Springer-Verlag.
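To make the idea of online, per-pixel Bayesian filtering with separate foreground and background appearance models concrete, here is a minimal toy sketch. It is not the authors' implementation: the running-Gaussian appearance models, the fixed noise level, the blending rates, and the simplified prediction step (which ignores the paper's long-term trajectories and motion compensation) are all assumptions made purely for illustration.

```python
# Toy sketch (not the paper's method): recursive per-pixel Bayesian update of
# a foreground posterior using two simple Gaussian appearance models.
import numpy as np

H, W = 48, 64
rng = np.random.default_rng(0)

# Per-pixel appearance models: mean intensity for background and foreground.
bg_mean = np.full((H, W), 0.2)
fg_mean = np.full((H, W), 0.8)
sigma = 0.1                       # assumed fixed observation noise
p_fg = np.full((H, W), 0.5)       # prior foreground probability per pixel

def gauss_like(x, mean, sigma):
    """Gaussian likelihood of an observed intensity under a pixel model."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for t in range(30):
    # Synthetic frame: dark background with a bright square moving right.
    frame = 0.2 + 0.02 * rng.standard_normal((H, W))
    x0 = 5 + t
    frame[15:30, x0:x0 + 12] = 0.8

    # Prediction step: relax the posterior toward a neutral prior. This is a
    # crude stand-in for propagating the label distribution with motion cues.
    p_fg = 0.9 * p_fg + 0.1 * 0.5

    # Update step: Bayes' rule with the two appearance likelihoods.
    l_fg = gauss_like(frame, fg_mean, sigma)
    l_bg = gauss_like(frame, bg_mean, sigma)
    post = p_fg * l_fg / (p_fg * l_fg + (1.0 - p_fg) * l_bg + 1e-12)

    # Online model maintenance: update whichever model is responsible.
    bg_mean = np.where(post < 0.5, 0.95 * bg_mean + 0.05 * frame, bg_mean)
    fg_mean = np.where(post >= 0.5, 0.95 * fg_mean + 0.05 * frame, fg_mean)
    p_fg = post

mask = p_fg > 0.5                 # foreground mask for the last frame
print("foreground pixels:", int(mask.sum()))
```

The sketch only conveys the recursive predict/update structure of per-pixel filtering; the actual method additionally labels long-term point trajectories and uses them to drive the motion and appearance models.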

Citation (APA)

Elqursh, A., & Elgammal, A. (2012). Online moving camera background subtraction. In Lecture Notes in Computer Science (Vol. 7577, pp. 228–241). Springer. https://doi.org/10.1007/978-3-642-33783-3_17
