Scale-Aware Feature Network for Weakly Supervised Semantic Segmentation

2 citations · 9 Mendeley readers

This article is free to access.

Abstract

Weakly supervised semantic segmentation with image-level labels is of great significance since it alleviates the dependency on dense annotations. However, because it relies on image classification networks that can only produce sparse object localization maps, its performance falls far behind that of fully supervised semantic segmentation models. Inspired by the successful use of multi-scale features for improved performance across a wide range of visual tasks, we propose a Scale-Aware Feature Network (SAFN) for generating object localization maps. The proposed SAFN uses an attention module to learn the relative weights of multi-scale features in a modified fully convolutional network with dilated convolutions. This approach efficiently enlarges the receptive fields and produces dense object localization maps. Our approach achieves mIoUs of 62.3% and 66.5% on the PASCAL VOC 2012 test set using VGG16-based and ResNet-based segmentation models, respectively, outperforming other state-of-the-art methods for the weakly supervised semantic segmentation task.
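The core idea of the abstract, fusing multi-scale features extracted with dilated convolutions via per-pixel attention weights, can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: the single-channel naive convolution and the use of the feature responses themselves as attention logits (a stand-in for a learned attention branch) are assumptions for brevity.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Naive single-channel 2D convolution with dilation and 'same' padding."""
    kh, kw = kernel.shape
    ph, pw = dilation * (kh - 1) // 2, dilation * (kw - 1) // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    acc += kernel[u, v] * xp[i + u * dilation, j + v * dilation]
            out[i, j] = acc
    return out

def scale_aware_fusion(x, kernel, dilations):
    """Fuse multi-scale features with a per-pixel softmax over scales.

    Each dilation rate yields one feature map; the softmax weights play the
    role of the attention module that learns the relative importance of each
    scale at every spatial location.
    """
    feats = np.stack([dilated_conv2d(x, kernel, d) for d in dilations])  # (S, H, W)
    logits = feats - feats.max(axis=0, keepdims=True)  # stabilized softmax
    weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return (weights * feats).sum(axis=0)  # attention-weighted fusion, (H, W)
```

In a trained network the attention weights would come from a dedicated branch with learned parameters; the convex combination over scales is what lets larger dilation rates contribute context (enlarged receptive field) without losing the fine localization of smaller ones.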

Cite

CITATION STYLE

APA

Xu, L., Bennamoun, M., Boussaid, F., & Sohel, F. (2020). Scale-Aware Feature Network for Weakly Supervised Semantic Segmentation. IEEE Access, 8, 75957–75967. https://doi.org/10.1109/ACCESS.2020.2989331
