Class-specified segmentation with multi-scale superpixels

Abstract

This paper proposes a class-specified segmentation method that not only segments foreground objects from the background at the pixel level, but also parses images. Such class-specified segmentation is very helpful to many other computer vision tasks, including computational photography. The novelty of our method is that we use multi-scale superpixels, rather than superpixels at a single scale, to effectively extract object-level regions. The contextual information across scales and the spatial coherency of neighboring superpixels at the same scale are represented and integrated via a Conditional Random Field (CRF) model defined on the multi-scale superpixels. Compared with other methods that combine multi-scale superpixel extraction with cross-scale contextual modeling, our method has fewer free parameters and is simpler yet effective. Its superiority over related approaches is demonstrated on two widely used datasets, Graz-02 and MSRC. © 2013 Springer-Verlag.
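To make the structure described in the abstract concrete, below is a minimal, illustrative sketch (not the authors' code) of a CRF energy over multi-scale superpixels: unary class costs per superpixel, a pairwise term between spatial neighbors within a scale, and a pairwise term between overlapping superpixels at adjacent scales. All names, weights, the Potts-style penalties, and the toy ICM inference routine are assumptions made for illustration; the paper's actual potentials and inference may differ.

```python
import numpy as np


def crf_energy(labels, unary, same_scale_edges, cross_scale_edges,
               w_spatial=1.0, w_scale=1.0):
    """Energy of a labelling over all superpixel nodes (lower is better).

    labels            : (N,) int array, one class label per superpixel node
    unary             : (N, C) array, negative log class likelihoods
    same_scale_edges  : (i, j) pairs of neighbouring superpixels within one
                        scale (spatial-coherency term)
    cross_scale_edges : (i, j) pairs of overlapping superpixels at adjacent
                        scales (cross-scale context term)
    """
    e = unary[np.arange(len(labels)), labels].sum()
    # Potts-style penalty whenever connected superpixels disagree.
    e += w_spatial * sum(labels[i] != labels[j] for i, j in same_scale_edges)
    e += w_scale * sum(labels[i] != labels[j] for i, j in cross_scale_edges)
    return e


def icm(unary, same_scale_edges, cross_scale_edges, n_iters=10, **kw):
    """Toy iterated-conditional-modes inference; a stand-in for whatever
    exact or approximate CRF inference is actually used."""
    n, c = unary.shape
    labels = unary.argmin(axis=1)          # start from the unary-only decision
    for _ in range(n_iters):
        for i in range(n):
            labels[i] = min(range(c), key=lambda l: crf_energy(
                np.concatenate([labels[:i], [l], labels[i + 1:]]),
                unary, same_scale_edges, cross_scale_edges, **kw))
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    unary = rng.random((6, 2))                     # 6 superpixels, 2 classes
    same_scale = [(0, 1), (1, 2), (3, 4), (4, 5)]  # neighbours within scales
    cross_scale = [(0, 3), (1, 4), (2, 5)]         # fine-to-coarse overlaps
    print(icm(unary, same_scale, cross_scale))
```

The cross-scale edges are what distinguish this formulation from a single-scale superpixel CRF: labels of fine superpixels are encouraged to agree with the coarser regions that contain them, which is one plausible way to encode the cross-scale contextual information the abstract refers to.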

Cite

APA

Liu, H., Qu, Y., Wu, Y., & Wang, H. (2013). Class-specified segmentation with multi-scale superpixels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7728 LNCS, pp. 158–169). https://doi.org/10.1007/978-3-642-37410-4_14
