Importance weighted inductive transfer learning for regression


Abstract

We consider inductive transfer learning under dataset shift, a situation in which the distributions of two sampled, but closely related, datasets differ. When the target data to be predicted are scarce, one would like to improve their prediction by employing data from another, secondary dataset. Transfer learning addresses this task by suitably compensating for the dataset shift. In this work we assume that the distributions of both the covariates and the dependent variables can differ arbitrarily between the datasets. We propose two methods for regression based on importance weighting: each instance of the secondary data is assigned a weight such that it contributes positively to the prediction of the target data. Experiments show that our methods yield good results on benchmark and real-world datasets.
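As a minimal illustration of the importance-weighting idea described in the abstract (not the paper's actual weight-learning procedure), the sketch below fits a weighted ridge regression in which every secondary instance carries an individual weight. The uniform down-weighting of the secondary data is a placeholder assumption; the paper's methods learn per-instance weights.

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-3):
    """Solve min_beta  sum_i w_i (y_i - x_i @ beta)^2 + lam * ||beta||^2."""
    Xw = X * w[:, None]                          # row-wise weighting
    A = X.T @ Xw + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, Xw.T @ y)

rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0])

# Scarce target data drawn from the distribution we care about.
X_t = rng.normal(size=(10, 2))
y_t = X_t @ beta_true + 0.1 * rng.normal(size=10)

# Abundant secondary data with shifted covariates and a shifted response.
X_s = rng.normal(loc=2.0, size=(100, 2))
y_s = X_s @ beta_true + 0.5 + 0.1 * rng.normal(size=100)

# Placeholder importance weights: target instances count fully,
# each secondary instance is down-weighted (here uniformly to 0.1).
X = np.vstack([X_t, X_s])
y = np.concatenate([y_t, y_s])
w = np.concatenate([np.ones(10), 0.1 * np.ones(100)])

beta = weighted_ridge(X, y, w)
```

With the secondary data down-weighted, the fitted coefficients stay close to the target relationship despite the shift in the secondary dataset; setting all weights to 1 instead would let the shifted secondary data dominate the scarce target sample.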

Citation (APA)

Garcke, J., & Vanck, T. (2014). Importance weighted inductive transfer learning for regression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8724 LNAI, pp. 466–481). Springer Verlag. https://doi.org/10.1007/978-3-662-44848-9_30
