Semantic segmentation (i.e., image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling requires accurate description and modeling of local contextual information. Superpixel image parsing methods provide this consistency by carrying out labeling at the superpixel level, based on superpixel features and neighborhood information. In this paper, we develop generalized and flexible contextual models for superpixel neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models that combine the complementary information available in alternative superpixel segmentations of the same image. Simulation results on two datasets demonstrate significant improvement in parsing accuracy over the baseline approach.
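To make the two core ideas concrete, the sketch below shows (a) superpixel-level labeling, where each superpixel takes the class that maximizes aggregated per-pixel scores, and (b) a simple fusion of labelings from alternative superpixel segmentations of the same image. This is a minimal illustrative example, not the contextual model proposed in the paper: the function names, the score-averaging fusion rule, and the use of raw per-pixel class scores are all assumptions made for the sketch.

```python
import numpy as np

def superpixel_labels(pixel_scores, segmentation):
    """Label each superpixel with the class maximizing the summed
    per-pixel class scores over its pixels, then paint the label
    back onto every pixel of that superpixel.

    pixel_scores: (H, W, C) array of per-pixel class scores.
    segmentation: (H, W) array of superpixel ids.
    Returns an (H, W) integer label map.
    """
    h, w, _ = pixel_scores.shape
    labels = np.zeros((h, w), dtype=int)
    for sp in np.unique(segmentation):
        mask = segmentation == sp
        # aggregate scores over the superpixel, pick the best class
        labels[mask] = pixel_scores[mask].sum(axis=0).argmax()
    return labels

def fuse_segmentations(pixel_scores, segmentations):
    """Hypothetical fusion scheme: average the superpixel-level score
    maps from several alternative segmentations, then take the
    per-pixel argmax over classes.
    """
    h, w, c = pixel_scores.shape
    acc = np.zeros((h, w, c))
    for seg in segmentations:
        for sp in np.unique(seg):
            mask = seg == sp
            # each segmentation contributes its superpixel-mean scores
            acc[mask] += pixel_scores[mask].mean(axis=0)
    return acc.argmax(axis=-1)
```

With two coarse segmentations of the same image, `fuse_segmentations` lets superpixels from one segmentation refine or confirm the labels implied by the other, which is the intuition behind combining complementary segmentations.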
Citation:
Ates, H. F., & Sunetci, S. (2017). Improving semantic segmentation with generalized models of local context. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10425 LNCS, pp. 320–330). Springer Verlag. https://doi.org/10.1007/978-3-319-64698-5_27