Few-Shot Adaptation of Pre-Trained Networks for Domain Shift

Abstract

Deep networks are prone to performance degradation when there is a domain shift between the source (training) data and target (test) data. Recent test-time adaptation methods update the batch normalization layers of pre-trained source models deployed in new target environments using streaming data to mitigate this degradation. Although such methods can adapt on the fly without first collecting a large target-domain dataset, their performance depends on streaming conditions such as mini-batch size and class distribution, which can be unpredictable in practice. In this work, we propose a framework for few-shot domain adaptation to address the practical challenges of data-efficient adaptation. Specifically, we propose a constrained optimization of the feature normalization statistics in pre-trained source models, supervised by a small support set from the target domain. Our method is easy to implement and improves source model performance with as few as one sample per class for classification tasks. Extensive experiments on 5 cross-domain classification datasets and 4 semantic segmentation datasets show that our method achieves more accurate and reliable performance than test-time adaptation, while not being constrained by streaming conditions.
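
As an illustration of the approach described above, the following is a minimal PyTorch sketch of few-shot adaptation of feature normalization parameters. It is not the authors' exact constrained optimization (which the abstract does not specify); instead, it uses a simple stand-in constraint: freeze all weights except the batch normalization affine parameters, then fine-tune those on a small labeled support set from the target domain. The function name adapt_bn_few_shot and all hyperparameters are hypothetical.

```python
# Illustrative sketch only: few-shot adaptation by updating batch
# normalization parameters on a small labeled target-domain support set.
# The constraint used here (freezing all non-BN parameters) is an
# assumption standing in for the paper's constrained optimization.
import torch
import torch.nn as nn

def adapt_bn_few_shot(model: nn.Module,
                      support_x: torch.Tensor,   # few labeled target samples
                      support_y: torch.Tensor,   # their class labels
                      steps: int = 50,
                      lr: float = 1e-3) -> nn.Module:
    # Freeze every parameter, then unfreeze only the BN scale/shift terms.
    for p in model.parameters():
        p.requires_grad_(False)

    bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
    bn_params = []
    for m in model.modules():
        if isinstance(m, bn_types):
            m.train()  # also re-estimate running statistics on target data
            for p in (m.weight, m.bias):
                if p is not None:
                    p.requires_grad_(True)
                    bn_params.append(p)

    optimizer = torch.optim.SGD(bn_params, lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(support_x), support_y)
        loss.backward()
        optimizer.step()

    model.eval()  # freeze statistics for deployment
    return model
```

Because only the normalization parameters are optimized on a fixed support set, the update is data-efficient and, unlike test-time adaptation, does not depend on the size or class balance of streaming mini-batches.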

Cite (APA)

Zhang, W., Shen, L., Zhang, W., & Foo, C. S. (2022). Few-Shot Adaptation of Pre-Trained Networks for Domain Shift. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1665–1671). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/232
