Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains

Abstract

Domain generalization (DG) is a difficult transfer learning problem that aims to learn a generalizable model for unseen domains. Recent foundation models (FMs) are robust to many distribution shifts and should therefore substantially improve the performance of DG. In this work, we study generic ways to adopt contrastive language-image pre-training (CLIP), a vision-language foundation model, for DG problems in image classification. While empirical risk minimization (ERM) greatly improves accuracy with larger backbones and training datasets on standard DG benchmarks, fine-tuning FMs is impractical in many real-world situations. We propose Domain Prompt Learning (DPL), a novel approach that performs domain inference in the form of conditional prompt generation. DPL achieves a significant accuracy improvement while training only a lightweight prompt generator (a three-layer MLP), whose parameter count is comparable to that of the classification projector used in previous DG literature. Combining DPL with CLIP yields surprisingly strong performance, raising the accuracy of zero-shot CLIP from 73.7% to 79.3% on several standard datasets, namely PACS, VLCS, OfficeHome, and TerraIncognita. We hope that the simplicity and success of our approach lead to broader adoption and analysis of foundation models in the field of domain generalization.
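
The description above suggests the following minimal sketch: a frozen CLIP backbone, a three-layer MLP that generates a domain-conditioned prompt vector from a batch of image features, and cosine-similarity classification against prompt-adjusted class text features. The layer widths, the batch-mean domain summary, and the additive fusion with the text features are illustrative assumptions, not the paper's exact design (which performs conditional prompt generation for CLIP's text encoder).

# Sketch of the DPL idea, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainPromptGenerator(nn.Module):
    """Three-layer MLP mapping a batch-level image-feature summary to a prompt vector."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # Summarize the incoming batch as a proxy for the (unseen) domain,
        # then emit one prompt vector conditioned on that summary.
        domain_summary = image_features.mean(dim=0, keepdim=True)
        return self.mlp(domain_summary)


def dpl_logits(image_features, text_features, prompt, temperature=0.01):
    """Score images against class text features shifted by the domain prompt.

    Adding the prompt to precomputed text features is one plausible fusion,
    used here only to keep the sketch self-contained.
    """
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features + prompt, dim=-1)
    return img @ txt.t() / temperature


if __name__ == "__main__":
    feat_dim, num_classes, batch = 512, 7, 16   # e.g. PACS has 7 classes
    generator = DomainPromptGenerator(feat_dim)

    # Stand-ins for frozen CLIP image/text features (CLIP itself is not trained).
    image_features = torch.randn(batch, feat_dim)
    text_features = torch.randn(num_classes, feat_dim)

    prompt = generator(image_features)
    logits = dpl_logits(image_features, text_features, prompt)
    print(logits.shape)  # torch.Size([16, 7])

    n_params = sum(p.numel() for p in generator.parameters())
    print(f"trainable parameters in the prompt generator: {n_params:,}")

Only the generator is trained while CLIP stays frozen, so the trainable footprint remains tiny compared with fine-tuning the full backbone, in line with the abstract's emphasis on a lightweight prompt generator.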

Cite

APA

Zhang, X., Gu, S. S., Matsuo, Y., & Iwasawa, Y. (2023). Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains. Transactions of the Japanese Society for Artificial Intelligence, 38(6). https://doi.org/10.1527/tjsai.38-6_B-MC2
