DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training

Abstract

Unsupervised domain adaptation has recently been used to reduce the domain shift, which ultimately improves the performance of semantic segmentation on unlabeled real-world data. In this paper, we follow this trend and propose a novel method that reduces the domain shift using two strategies: discriminator attention and self-training. The discriminator attention strategy is a two-stage adversarial learning process that explicitly distinguishes well-aligned (domain-invariant) features from poorly aligned (domain-specific) ones, and then guides the model to focus on the latter. The self-training strategy adaptively improves the model's decision boundary for the target domain, which implicitly facilitates the extraction of domain-invariant features. Combining the two strategies yields a more effective way to reduce the domain shift. Extensive experiments demonstrate the effectiveness of the proposed method on numerous benchmark datasets.
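The self-training strategy described above typically relies on confidence-thresholded pseudo-labels: the current model predicts classes for unlabeled target-domain pixels, and only high-confidence predictions are kept as training targets for the next round. A minimal sketch of that pseudo-labeling step is below; the function name, threshold value, and ignore index are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pseudo_labels(logits, threshold=0.9):
    """Confidence-thresholded pseudo-labels for self-training (illustrative).

    logits: array of shape (C, H, W), unnormalised class scores for one
            target-domain image.
    Returns an (H, W) integer label map, with -1 ("ignore") wherever the
    model's confidence falls below the threshold.
    """
    # Softmax over the class axis (numerically stabilised)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)

    conf = probs.max(axis=0)        # per-pixel confidence
    labels = probs.argmax(axis=0)   # per-pixel predicted class
    labels[conf < threshold] = -1   # drop uncertain pixels from training
    return labels

# Toy example: 2 classes on a 1x2 "image"
logits = np.array([[[5.0, 0.1]],
                   [[0.0, 0.0]]])  # shape (2, 1, 2)
labels = pseudo_labels(logits, threshold=0.9)
# First pixel is confidently class 0; second is near 50/50 and gets ignored.
```

In practice the threshold is often set per class or adapted over training rounds so that rare classes are not filtered out entirely; the resulting label map is then used with a cross-entropy loss that skips the ignore index.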

Citation (APA)

Yu, F., Zhang, M., Dong, H., Hu, S., Dong, B., & Zhang, L. (2021). DAST: Unsupervised Domain Adaptation in Semantic Segmentation Based on Discriminator Attention and Self-Training. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12B, pp. 10754–10762). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17285
