Improving Gradient Trade-offs between Tasks in Multi-task Text Classification

Abstract

Multi-task learning (MTL) has emerged as a promising approach for sharing inductive bias across multiple tasks, enabling more efficient learning in text classification. However, training all tasks simultaneously often degrades the performance of individual tasks relative to learning them independently, because different tasks may conflict with one another. Existing MTL methods alleviate this issue by leveraging heuristics or gradient-based algorithms to reach an arbitrary Pareto-optimal trade-off among the tasks. In this paper, we present a novel gradient trade-off approach, dubbed GetMTL, that mitigates the task-conflict problem by achieving a specific trade-off among tasks near the main objective of multi-task text classification (MTC), thereby improving the performance of every task simultaneously. Extensive experiments on two benchmark datasets support our theoretical analysis and validate the superiority of the proposed GetMTL.
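The core difficulty the abstract describes is that per-task gradients on shared parameters can point in opposing directions. Below is a minimal PyTorch sketch of that phenomenon and of one existing gradient-based remedy (a PCGrad-style projection); the toy losses and the `resolve` helper are illustrative assumptions, and this is not the paper's GetMTL algorithm, whose specific trade-off construction is given in the full text.

```python
import torch

# Toy illustration of conflicting task gradients in multi-task learning.
# NOT the paper's GetMTL algorithm: the losses, the `resolve` helper, and
# the PCGrad-style projection below are illustrative assumptions.

# A shared parameter vector pulled toward two different targets, one per task.
theta = torch.zeros(4, requires_grad=True)
target_a = torch.tensor([1.0, 0.2, 0.0, 0.0])
target_b = torch.tensor([-1.0, 0.2, 0.0, 0.0])

loss_a = ((theta - target_a) ** 2).sum()
loss_b = ((theta - target_b) ** 2).sum()

# Per-task gradients with respect to the shared parameters.
(g_a,) = torch.autograd.grad(loss_a, theta, retain_graph=True)
(g_b,) = torch.autograd.grad(loss_b, theta)

# A negative cosine similarity means the two tasks' gradients conflict:
# following one gradient increases the other task's loss.
cos = torch.nn.functional.cosine_similarity(g_a, g_b, dim=0)
print(f"cosine(g_a, g_b) = {cos:.3f}")  # negative here: the tasks conflict

def resolve(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """PCGrad-style fix: project g_i onto the normal plane of g_j
    whenever the two gradients conflict (negative inner product)."""
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / g_j.norm() ** 2) * g_j
    return g_i

# Combine the de-conflicted gradients and take one descent step.
update = 0.5 * (resolve(g_a, g_b) + resolve(g_b, g_a))
with torch.no_grad():
    theta -= 0.1 * update
```

Projection-based methods like the sketch above resolve conflicts pairwise but land on an arbitrary point of the Pareto front; the paper's contribution is steering that trade-off toward the main MTC objective.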

Cite

APA

Chai, H., Cui, J., Wang, Y., Zhang, M., Fang, B., & Liao, Q. (2023). Improving Gradient Trade-offs between Tasks in Multi-task Text Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 2565–2579). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.144
