Towards Explainable Deep Domain Adaptation

Abstract

In many practical applications, the data used for training a machine learning model and the data encountered at deployment do not follow the same distribution. Transfer learning, and in particular domain adaptation, makes it possible to overcome this issue by adapting the source model to a new target data distribution, thereby generalizing knowledge from the source domain to the target domain. In this work, we present a method that makes the adaptation process more transparent by providing two complementary explanation mechanisms. The first mechanism explains how the source and target distributions are aligned in the latent space of the domain adaptation model. The second mechanism provides descriptive explanations of how the decision boundary changes in the adapted model with respect to the source model. Along with a description of the method, we also provide initial results obtained on a publicly available, real-life dataset.
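
The abstract does not include an implementation, but the latent-space alignment it refers to can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example of one common alignment strategy, a maximum mean discrepancy (MMD) penalty between source and target latent batches. The network sizes, the toy data, and the choice of MMD are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch of latent-space alignment for domain adaptation.
# The paper's architecture is not given in the abstract; all shapes and
# the MMD alignment loss here are illustrative assumptions.
import torch
import torch.nn as nn

def gaussian_mmd(source_z, target_z, sigma=1.0):
    """Biased squared-MMD estimate between two latent batches (RBF kernel)."""
    def kernel(a, b):
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return (kernel(source_z, source_z).mean()
            + kernel(target_z, target_z).mean()
            - 2 * kernel(source_z, target_z).mean())

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
classifier = nn.Linear(8, 2)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Toy batches: labeled source data and unlabeled, distribution-shifted target data.
xs, ys = torch.randn(64, 16), torch.randint(0, 2, (64,))
xt = torch.randn(64, 16) + 0.5  # shifted target distribution

for step in range(100):
    zs, zt = encoder(xs), encoder(xt)
    # Source classification loss plus a term that pulls the source and
    # target latent distributions together in the shared latent space.
    loss = ce(classifier(zs), ys) + gaussian_mmd(zs, zt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a setup like this, the first explanation mechanism described in the abstract would account for how the alignment term reshapes the shared latent space, while the second would contrast the decision boundary of the adapted classifier with that of the source-only model.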

Cite

Bobek, S., Nowaczyk, S., Pashami, S., Taghiyarrenani, Z., & Nalepa, G. J. (2024). Towards Explainable Deep Domain Adaptation. In Communications in Computer and Information Science (Vol. 1947, pp. 101–113). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-50396-2_6
