Rethinking Alignment and Uniformity in Unsupervised Image Semantic Segmentation

16 citations · 10 Mendeley readers

Abstract

Unsupervised image semantic segmentation (UISS) aims to match low-level visual features with semantic-level representations without external supervision. In this paper, we examine the critical properties of UISS models from the perspective of feature alignment and feature uniformity, and we compare UISS with image-wise representation learning. Based on this analysis, we argue that existing MI-based methods for UISS suffer from representation collapse. Motivated by this, we propose a robust network called the Semantic Attention Network (SAN), which includes a new module, Semantic Attention (SEAT), that generates pixel-wise and semantic features dynamically. Experimental results on multiple semantic segmentation benchmarks show that our unsupervised segmentation framework excels at capturing semantic representations, outperforming all non-pretrained and even several pretrained methods.
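The abstract does not give SEAT's exact formulation, but the idea of attending pixel-wise features to a set of semantic representations can be sketched as a simple cross-attention between per-pixel encoder features and learnable semantic prototypes. The following is an illustrative sketch only, not the paper's actual module; the function and variable names (`semantic_attention`, `prototypes`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_attention(pixel_feats, prototypes):
    """Attend each pixel feature to a set of semantic prototypes.

    pixel_feats: (N, D) per-pixel features from an encoder
    prototypes:  (K, D) learnable semantic embeddings (hypothetical stand-in
                 for the semantic representations SEAT operates on)
    Returns soft pixel-to-class assignments (N, K) and, for each pixel,
    a semantic feature formed as the assignment-weighted prototype mix (N, D).
    """
    d = pixel_feats.shape[1]
    scores = pixel_feats @ prototypes.T / np.sqrt(d)  # scaled dot-product
    assign = softmax(scores, axis=1)                  # rows sum to 1
    sem_feats = assign @ prototypes                   # per-pixel semantic feature
    return assign, sem_feats

# Toy example: 6 pixels, 4-dim features, 3 semantic prototypes.
rng = np.random.default_rng(0)
pix = rng.normal(size=(6, 4))
protos = rng.normal(size=(3, 4))
assign, sem = semantic_attention(pix, protos)
```

Taking the argmax over the assignment rows would yield a hard segmentation label per pixel; in an unsupervised setting the prototypes themselves would be learned jointly with the encoder.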

Cite

CITATION STYLE

APA

Zhang, D., Li, C., Li, H., Huang, W., Huang, L., & Zhang, J. (2023). Rethinking Alignment and Uniformity in Unsupervised Image Semantic Segmentation. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023 (Vol. 37, pp. 11192–11200). AAAI Press. https://doi.org/10.1609/aaai.v37i9.26325
