In this study, we consider a transfer-learning problem under the parameter transfer approach, in which a suitable parameter of a feature mapping is learned on one task and applied to another target task. We introduce the notions of local stability and parameter transfer learnability for parametric feature mappings, and derive an excess risk bound for parameter transfer algorithms. As an application of parameter transfer learning, we analyze the performance of sparse coding in self-taught learning. Although self-taught learning algorithms with large volumes of unlabeled data often show excellent empirical performance, they have so far lacked theoretical analysis. In this paper, we also provide a theoretical excess risk bound for self-taught learning. In addition, we show that the results of numerical experiments agree with our theoretical analysis.
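The parameter transfer pipeline described above can be sketched as follows: a sparse-coding dictionary (the learned feature-mapping parameter) is fit on plentiful unlabeled source data, then transferred to produce features for a small labeled target task. This is a minimal illustrative sketch using scikit-learn, not the paper's own implementation; the data sizes, dictionary size, and classifier are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source task: plentiful unlabeled data used to learn the feature-mapping
# parameter (here a sparse-coding dictionary; 20 atoms is an arbitrary choice).
X_unlabeled = rng.normal(size=(500, 30))
dictionary = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                         random_state=0)
dictionary.fit(X_unlabeled)

# Target task: a small labeled sample, mapped through the transferred
# dictionary to obtain sparse-code features, then fed to a supervised learner.
X_labeled = rng.normal(size=(40, 30))
y = rng.integers(0, 2, size=40)
codes = dictionary.transform(X_labeled)   # transferred feature representation
clf = LogisticRegression().fit(codes, y)
print(codes.shape)  # (40, 20): 40 target examples, 20 dictionary atoms
```

The excess risk bound in the paper concerns exactly this setting: how much the target-task risk degrades when the dictionary is estimated from source data rather than known exactly.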
Citation: Kumagai, W., & Kanamori, T. (2019). Risk bound of transfer learning using parametric feature mapping and its application to sparse coding. Machine Learning, 108(11), 1975–2008. https://doi.org/10.1007/s10994-019-05805-2