Cross-domain Ensemble Distillation for Domain Generalization

Abstract

Domain generalization is the task of learning models that generalize to unseen target domains. We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED), that learns domain-invariant features while encouraging the model to converge to flat minima, which has recently been shown to be a sufficient condition for domain generalization. To this end, our method generates an ensemble of the output logits from training data with the same label but from different domains, and then penalizes each output for its mismatch with the ensemble. We also present a de-stylization technique that standardizes features to encourage the model to produce style-consistent predictions even in an arbitrary target domain. Our method greatly improves generalization capability on public benchmarks for cross-domain image classification, cross-dataset person re-ID, and cross-dataset semantic segmentation. Moreover, we show that models learned by our method are robust against adversarial attacks and unseen corruptions.
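The following is a minimal PyTorch sketch of the two ideas the abstract describes: a distillation loss that pulls each sample's softened prediction toward the ensemble of predictions sharing its label (pooled across domains), and an instance-statistics standardization of features as the de-stylization step. The function names (xded_loss, destylize), the softmax temperature T, and the use of KL divergence against a detached mean-probability target are assumptions made for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def xded_loss(logits: torch.Tensor, labels: torch.Tensor, T: float = 4.0) -> torch.Tensor:
    """Penalize each sample's softened prediction for deviating from the
    ensemble of predictions that share its label, regardless of domain.
    Sketch only: temperature and reduction are assumed, not from the paper."""
    log_p = F.log_softmax(logits / T, dim=1)   # per-sample soft predictions
    p = log_p.exp()
    loss = logits.new_zeros(())
    n_classes_used = 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:                    # need >= 2 samples to form an ensemble
            continue
        # Cross-domain ensemble target: mean probability over same-label samples,
        # detached so it acts as a fixed teacher for this step.
        ensemble = p[idx].mean(dim=0, keepdim=True).detach()
        loss = loss + F.kl_div(
            log_p[idx], ensemble.expand(idx.numel(), -1), reduction="batchmean"
        )
        n_classes_used += 1
    return loss / max(n_classes_used, 1)

def destylize(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Standardize each sample's feature map with its own channel-wise
    statistics (instance-normalization style), removing style information
    so predictions become style-consistent. Assumed implementation."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.var(dim=(2, 3), keepdim=True, unbiased=False).sqrt()
    return (feat - mu) / (sigma + eps)
```

In use, one would presumably combine the distillation term with the standard classification objective, e.g. total = F.cross_entropy(logits, labels) + lam * xded_loss(logits, labels), where the weight lam is a hypothetical hyperparameter introduced here for illustration.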

Citation (APA)

Lee, K., Kim, S., & Kwak, S. (2022). Cross-domain ensemble distillation for domain generalization. In Lecture Notes in Computer Science (Vol. 13685, pp. 1–20). Springer. https://doi.org/10.1007/978-3-031-19806-9_1
