Online web-data-driven segmentation of selected moving objects in videos

Abstract

We present an online Web-data-driven framework for segmenting moving objects in videos. The framework uses object shape priors learned online from relevant labeled images ranked within a large-scale Web image set. The online prior-learning method has three steps: (1) relevant silhouette images for training are selected online using a user-provided bounding box and an object class annotation; (2) image patches containing the annotated object are obtained for testing via an online-trained tracker; (3) a holistic shape energy term is learned for the object, while object and background seed labels are propagated between frames. Finally, the segmentation is optimized via 3-D graph cuts using the shape term and soft seed assignments. The system's performance is evaluated on the challenging YouTube dataset and found to be competitive with state-of-the-art methods that require offline modeling based on pre-selected templates and a pre-trained person detector. Comparison experiments verify that tracking and seed label propagation each reduce distraction, while the shape prior yields more complete segments. © 2013 Springer-Verlag.
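The abstract's final step combines a learned shape term with appearance likelihoods and soft seed assignments inside a graph-cut energy. The sketch below is only an illustration of how such a per-pixel (unary) energy might be assembled; the weighting names (`lam`, `seed_weight`), the negative-log-likelihood form, and the per-pixel argmin in place of the paper's 3-D graph cut are all assumptions, not the authors' implementation.

```python
import numpy as np

def shape_prior_cost(prior, label):
    # prior[i, j] in (0, 1): foreground probability under the holistic
    # shape prior (in the paper, learned online from Web silhouettes).
    eps = 1e-6
    p = np.clip(prior, eps, 1 - eps)
    return -np.log(p) if label == 1 else -np.log(1 - p)

def unary_energy(appearance_fg, shape_prior, seeds, lam=0.5, seed_weight=2.0):
    """Per-pixel costs for labels {0, 1}, blending an appearance
    likelihood, the shape term, and soft seeds (+1 fg, -1 bg, 0 none).
    `lam` and `seed_weight` are illustrative weights, not from the paper."""
    eps = 1e-6
    p = np.clip(appearance_fg, eps, 1 - eps)
    cost_fg = -np.log(p) + lam * shape_prior_cost(shape_prior, 1)
    cost_bg = -np.log(1 - p) + lam * shape_prior_cost(shape_prior, 0)
    # Soft seeds bias, but do not hard-constrain, the labeling.
    cost_fg = cost_fg + seed_weight * (seeds == -1)
    cost_bg = cost_bg + seed_weight * (seeds == +1)
    return cost_fg, cost_bg

def segment(appearance_fg, shape_prior, seeds, lam=0.5):
    # The paper adds pairwise smoothness and minimizes the energy with
    # 3-D graph cuts across frames; here we take a per-pixel argmin only.
    cost_fg, cost_bg = unary_energy(appearance_fg, shape_prior, seeds, lam)
    return (cost_fg < cost_bg).astype(np.uint8)
```

In a full implementation the two cost maps would become terminal (source/sink) edge capacities in a spatio-temporal graph, with pairwise edges enforcing smoothness within and across frames.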

APA

Xiang, X., Chang, H., & Luo, J. (2013). Online web-data-driven segmentation of selected moving objects in videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7725 LNCS, pp. 134–146). https://doi.org/10.1007/978-3-642-37444-9_11
