Self-supervised domain adaptation for computer vision tasks


Abstract

Recent progress in self-supervised visual representation learning has achieved remarkable success on many challenging computer vision benchmarks. However, whether these techniques can be used for domain adaptation has not been explored. In this work, we propose a generic method for self-supervised domain adaptation, using object recognition and semantic segmentation of urban scenes as use cases. Focusing on simple pretext/auxiliary tasks (e.g., image rotation prediction), we assess different learning strategies for improving domain adaptation effectiveness through self-supervision. Additionally, we propose two complementary strategies to further boost domain adaptation accuracy on semantic segmentation within our method: prediction layer alignment and batch normalization calibration. The experimental results show adaptation levels comparable to the most studied domain adaptation methods, thus establishing self-supervision as a new alternative for achieving domain adaptation. The code is available at https://github.com/Jiaolong/self-supervised-da
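The rotation-prediction pretext task mentioned in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: given an input image, the four planar rotations are generated and paired with pretext labels 0–3, which a classifier would then be trained to predict. The function name `make_rotation_batch` is a hypothetical helper introduced here for illustration.

```python
import numpy as np

def make_rotation_batch(image):
    """Build a self-supervised batch: the four planar rotations of `image`
    (0, 90, 180, 270 degrees) paired with pretext labels 0-3.
    Illustrative sketch of the rotation-prediction pretext task."""
    rotations = [np.rot90(image, k) for k in range(4)]  # rotate k * 90 degrees
    labels = np.arange(4)                               # pretext class per rotation
    return np.stack(rotations), labels

# Toy example: one 8x8 grayscale "image"
img = np.arange(64, dtype=np.float32).reshape(8, 8)
batch, labels = make_rotation_batch(img)
print(batch.shape)  # (4, 8, 8)
print(labels)       # [0 1 2 3]
```

In the paper's setting, a shared feature extractor would process both source and target images, with the rotation classifier providing a supervisory signal on unlabeled target data.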

Citation (APA):

Xu, J., Xiao, L., & Lopez, A. M. (2019). Self-supervised domain adaptation for computer vision tasks. IEEE Access, 7, 156694–156706. https://doi.org/10.1109/ACCESS.2019.2949697
