BanditMTL: Bandit-based multi-task learning for text classification


Abstract

Task variance regularization, which can be used to improve the generalization of Multi-Task Learning (MTL) models, remains unexplored in multi-task text classification. To fill this gap, this paper investigates how task variance can be effectively regularized and, accordingly, proposes a multi-task learning method based on adversarial multi-armed bandits. The proposed method, named BanditMTL, regularizes task variance by means of a mirror gradient ascent-descent algorithm. Applying BanditMTL to multi-task text classification is found to achieve state-of-the-art performance. The results of extensive experiments back up our theoretical analysis and validate the superiority of our proposal.
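The abstract does not spell out the update rule, but a mirror gradient ascent-descent scheme of this general kind can be sketched: task weights live on the probability simplex and are updated by an exponentiated-gradient (mirror ascent) step that up-weights high-loss tasks, while the shared model parameters descend on the weighted loss. The PyTorch sketch below is a minimal illustration under those assumptions only; the function name, the `task=i` model interface, and the learning rate `eta_ascent` are hypothetical, and the paper's actual algorithm additionally constrains the weights via a task-variance term.

```python
import torch

# Minimal sketch (not the paper's exact algorithm): alternating
# mirror-ascent on task weights and gradient descent on the model.

def banditmtl_step(model, batches, loss_fns, weights, opt,
                   eta_ascent=0.1):
    """One ascent-descent step over a list of per-task batches.

    weights: 1-D tensor on the probability simplex (one entry per task).
    """
    # Per-task losses computed with the shared model.
    losses = torch.stack([
        loss_fns[i](model(x, task=i), y)
        for i, (x, y) in enumerate(batches)
    ])

    # Descent: minimize the weighted sum of task losses.
    opt.zero_grad()
    (weights.detach() * losses).sum().backward()
    opt.step()

    # Ascent (mirror step under the entropy mirror map):
    # exponentiated-gradient update that up-weights tasks with
    # larger loss, then renormalization back onto the simplex.
    with torch.no_grad():
        new_w = weights * torch.exp(eta_ascent * losses.detach())
        weights.copy_(new_w / new_w.sum())
    return losses.detach()
```

The exponentiated-gradient step is the standard mirror-ascent update under the entropy mirror map, which is why the weights stay on the simplex without an explicit projection.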

Citation (APA)

Mao, Y., Wang, Z., Liu, W., Lin, X., & Hu, W. (2021). BanditMTL: Bandit-based multi-task learning for text classification. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (Vol. 1, pp. 5506–5516). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.428
