An analysis of generalization error in relevant subtask learning


Abstract

A recent variant of multi-task learning uses the other tasks to help in learning a task-of-interest for which there is too little training data. The task can be classification, prediction, or density estimation. The difficulty is that only some of the data from the other tasks are relevant or representative for the task-of-interest. It has been experimentally demonstrated that a generative model works well on this relevant subtask learning problem. In this paper we analyze the generalization error of the model, show that it is smaller than that of standard alternatives, and point out connections to semi-supervised learning, multi-task learning, and active learning or covariate shift. © 2009 Springer Berlin Heidelberg.
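The core idea in the abstract — that only some auxiliary-task data are relevant to the task-of-interest — can be illustrated with a toy sketch. The following is a hypothetical, simplified illustration (not the paper's actual model): each auxiliary-task sample is assigned a relevance weight equal to its posterior probability of having come from the task-of-interest's density rather than a broad background component. All densities, parameters, and data here are invented for illustration.

```python
# Hypothetical sketch, NOT the model analyzed in the paper: down-weight
# auxiliary-task samples by how well they fit a density estimated from
# the task-of-interest's (scarce) data.
import math

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def relevance_weights(aux, mu, sigma, bg_mu, bg_sigma, prior=0.5):
    """Posterior probability that each auxiliary sample was generated by the
    task-of-interest component rather than a broad background component."""
    weights = []
    for x in aux:
        p_rel = prior * gauss_pdf(x, mu, sigma)          # relevant component
        p_irr = (1 - prior) * gauss_pdf(x, bg_mu, bg_sigma)  # irrelevant component
        weights.append(p_rel / (p_rel + p_irr))
    return weights

# Task-of-interest density (as if estimated from its few samples): N(0, 1).
# The auxiliary task mixes relevant points near 0 with irrelevant ones near 5.
aux = [-0.3, 0.1, 4.8, 5.2]
w = relevance_weights(aux, mu=0.0, sigma=1.0, bg_mu=2.5, bg_sigma=3.0)
print(w)  # samples near 0 get weight close to 1; samples near 5 close to 0
```

The weights could then scale each auxiliary sample's contribution to the likelihood of the task-of-interest, which is the general spirit of using only the relevant subtask data.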

Citation (APA)

Yamazaki, K., & Kaski, S. (2009). An analysis of generalization error in relevant subtask learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5506 LNCS, pp. 629–637). https://doi.org/10.1007/978-3-642-02490-0_77
