When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP

Abstract

Multi-task learning (MTL) aims to achieve a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work: negative transfer can occur between tasks, especially when loosely related skills are aggregated, leaving open the question of when MTL helps. Previous studies show that MTL performance can be improved by algorithmic tricks, but which tasks and skills should be included is less well explored. In this work, we conduct a case study in Financial NLP, where multiple datasets exist for skills relevant to the domain, such as numeric reasoning and sentiment analysis. Motivated by the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work. Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and the choice of aggregation size and shared capacity. Specifically, MTL works well when tasks are diverse but related, and when the size of the task aggregation and the shared capacity of the model are balanced so that no single task is overwhelmed.
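To make the shared-capacity idea concrete, below is a minimal sketch (not the authors' implementation) of an MTL setup in PyTorch: one shared encoder whose parameters constitute the shared capacity, plus one lightweight head per aggregated task. The architecture, dimensions, and task names are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with one task-specific head per aggregated skill.

    The encoder parameters are the "shared capacity" that all tasks
    compete for; each head is task-specific. Hypothetical sketch, not
    the paper's actual architecture.
    """

    def __init__(self, vocab_size, hidden_dim, task_output_dims):
        super().__init__()
        # Shared capacity: every task's gradients update these parameters.
        self.embedding = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # One head per aggregated task (e.g. sentiment, numeric reasoning).
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, out_dim)
            for task, out_dim in task_output_dims.items()
        })

    def forward(self, token_ids, task):
        embedded = self.embedding(token_ids)
        _, hidden = self.encoder(embedded)        # final hidden state
        return self.heads[task](hidden[-1])       # task-specific logits

# Hypothetical aggregation of two loosely related financial skills.
model = MultiTaskModel(
    vocab_size=30_000,
    hidden_dim=256,
    task_output_dims={"sentiment": 3, "numeric_reasoning": 2},
)

# One MTL step: sum per-task losses and backpropagate through the
# shared encoder. Negative transfer shows up when one task's gradients
# degrade another task's performance.
loss_fn = nn.CrossEntropyLoss()
batch = {
    "sentiment": (torch.randint(0, 30_000, (8, 32)), torch.randint(0, 3, (8,))),
    "numeric_reasoning": (torch.randint(0, 30_000, (8, 32)), torch.randint(0, 2, (8,))),
}
total_loss = sum(
    loss_fn(model(tokens, task), labels)
    for task, (tokens, labels) in batch.items()
)
total_loss.backward()
```

Summing the per-task losses in a single backward pass is the simplest aggregation scheme; the abstract's point is that how many tasks share the encoder, and how much capacity the encoder has, together determine whether transfer is positive or negative.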

Citation (APA)

Ni, J., Jin, Z., Wang, Q., Sachan, M., & Leippold, M. (2023). When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7465–7488). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.412
