Multi-task learning and algorithmic stability

16 citations · 24 Mendeley readers
Abstract

In this paper, we study multi-task algorithms from the perspective of algorithmic stability. We define multi-task uniform stability, a generalization of conventional uniform stability, which measures the maximum change in the loss of a multi-task algorithm when it is trained on the same data sets but with one data point removed from each task. To analyze multi-task algorithms in terms of multi-task uniform stability, we prove a generalized McDiarmid's inequality, which requires the bounded-difference condition to hold when multiple input arguments are changed simultaneously, rather than only one as in the conventional McDiarmid's inequality. Using the generalized McDiarmid's inequality as a tool, we analyze the generalization performance of general multi-task algorithms in terms of their multi-task uniform stability. Moreover, as applications, we prove generalization bounds for several representative regularized multi-task algorithms.
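For orientation, here is a minimal LaTeX sketch of the two quantities the abstract introduces. The symbols (beta, tau, m, A, ell) and the exact normalization are assumptions made here for illustration; the paper's own notation and constants may differ.

% Conventional uniform stability (Bousquet & Elisseeff, 2002): an algorithm A
% has uniform stability \beta if, for every training set S, every index i,
% and every test point z,
\[
  \bigl| \ell(A_S, z) - \ell(A_{S^{\setminus i}}, z) \bigr| \le \beta .
\]
% Multi-task analogue as described in the abstract: with m tasks and training
% sets S_1, \dots, S_m, one point is removed from each task at once.
\[
  \Bigl| \ell\bigl(A_{(S_1, \dots, S_m)}, z\bigr)
       - \ell\bigl(A_{(S_1^{\setminus i_1}, \dots, S_m^{\setminus i_m})}, z\bigr) \Bigr|
  \le \tau
  \quad \text{for all } z \text{ and all } i_1, \dots, i_m .
\]
% Conventional McDiarmid's inequality: if changing any single argument X_k
% changes f by at most c_k, then
\[
  \Pr\bigl[ f(X_1, \dots, X_n) - \mathbb{E} f \ge \varepsilon \bigr]
  \le \exp\!\Bigl( - 2\varepsilon^2 \Big/ \textstyle\sum_{k=1}^{n} c_k^2 \Bigr).
\]
% The generalized version proved in the paper instead assumes the
% bounded-difference condition holds when several arguments (one per task)
% are changed simultaneously; the resulting tail bound keeps the same
% exponential form, with constants depending on the multi-argument bound.

The reason a generalized inequality is needed at all: a multi-task algorithm's output depends jointly on all m training sets, so removing one point per task perturbs m arguments of the relevant function at once, which the single-coordinate condition of the conventional McDiarmid's inequality does not cover.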

Citation (APA)

Zhang, Y. (2015). Multi-task learning and algorithmic stability. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (pp. 3181–3187). AAAI Press. https://doi.org/10.1609/aaai.v29i1.9558
