Efficient Online Multi-Task Learning via Adaptive Kernel Selection

Abstract

Conventional multi-task models restrict the task structure to be linearly related, which may be unsuitable when the data are not linearly separable. To remedy this issue, we propose a kernel algorithm for online multi-task classification, since the large approximation space provided by reproducing kernel Hilbert spaces often contains an accurate function. Specifically, the algorithm maintains a local-global Gaussian distribution over each task model that guides the direction and scale of parameter updates. Optimizing over this space, however, is computationally expensive. Moreover, most multi-task learning methods require access to all training instances, a luxury unavailable in the large-scale streaming setting. To overcome these issues, we propose a randomized kernel sampling technique that operates across multiple tasks. Instead of requiring labels for all inputs, the proposed algorithm decides whether to query a label by considering the confidence of the related tasks in the label prediction. Theoretically, the algorithm trained on actively sampled labels achieves results comparable to one trained on all labels. Empirically, the proposed algorithm attains promising learning efficacy while simultaneously reducing computational complexity and labeling cost.
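
The abstract combines three ingredients: kernel-based prediction, randomized sampling to bound the cost of the kernel expansion, and confidence-driven label querying. The Python sketch below shows one way these pieces could fit together; the class name, the Bernoulli query rule b / (b + |f|), the uniform eviction policy, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two input vectors.
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineMultiTaskKernelLearner:
    """Sketch of selective-sampling kernel learning over several related tasks.

    Each task keeps a budgeted set of support vectors; a label is queried
    with probability b / (b + |margin|), so confident predictions are
    rarely queried (hypothetical rule, illustrative hyperparameters)."""

    def __init__(self, num_tasks, budget=100, b=1.0, gamma=1.0):
        self.budget = budget  # cap on stored support vectors per task
        self.b = b            # exploration parameter for label queries
        self.gamma = gamma
        # (support vector, signed coefficient) pairs per task
        self.supports = [[] for _ in range(num_tasks)]

    def margin(self, task, x):
        # Kernel expansion f(x) = sum_i alpha_i * k(x_i, x).
        return sum(a * rbf(xi, x, self.gamma) for xi, a in self.supports[task])

    def step(self, task, x, label_oracle):
        f = self.margin(task, x)
        y_hat = 1 if f >= 0 else -1
        # Query a label with probability b / (b + |f|): low-confidence
        # predictions (small |f|) are queried more often.
        if rng.random() >= self.b / (self.b + abs(f)):
            return y_hat                  # skip the label, no update
        y = label_oracle(x)               # pay the labeling cost
        if y * f <= 0:                    # mistake-driven update
            self.supports[task].append((x, float(y)))
            if len(self.supports[task]) > self.budget:
                # Randomized sampling: evict a uniformly chosen support
                # vector so per-step kernel cost stays bounded.
                self.supports[task].pop(rng.integers(len(self.supports[task])))
        return y_hat

Under this query rule, high-confidence predictions (large |f|) are labeled with low probability, so the expected labeling cost drops as the model improves, which is the intuition behind the abstract's claim that actively sampled labels can match training on all labels.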

Cite

Yang, P., & Li, P. (2020). Efficient Online Multi-Task Learning via Adaptive Kernel Selection. In The Web Conference 2020 - Proceedings of the World Wide Web Conference, WWW 2020 (pp. 2465–2471). Association for Computing Machinery, Inc. https://doi.org/10.1145/3366423.3379993
