Abstract
Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences. Nevertheless, it is still not well understood how multi-task learning should be implemented according to the relatedness of the training tasks. In this survey, we review recent advances in multi-task learning methods in NLP, with the aim of summarizing them into two general multi-task training methods based on their task relatedness: (i) joint training and (ii) multi-step training. We present examples in various NLP downstream applications, summarize the task relationships, and discuss future directions of this promising topic.
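To make the distinction between the two training schemes concrete, below is a minimal PyTorch sketch (not code from the survey): joint training sums the losses of all tasks in a single update, while multi-step training updates on each task in sequence, transferring the shared encoder between steps. The encoder, task heads, dimensions, and toy batches are hypothetical placeholders.

import torch

encoder = torch.nn.Linear(128, 64)   # shared encoder (toy stand-in)
head_a = torch.nn.Linear(64, 2)      # task A head (e.g., sentence classification)
head_b = torch.nn.Linear(64, 5)      # task B head (e.g., sequence labeling)
params = list(encoder.parameters()) + list(head_a.parameters()) + list(head_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def toy_batch(n, d, num_classes):
    # random toy batch: features and integer labels
    return torch.randn(n, d), torch.randint(0, num_classes, (n,))

xa, ya = toy_batch(8, 128, 2)
xb, yb = toy_batch(8, 128, 5)

# (i) Joint training: one update on the summed losses of both tasks.
loss = loss_fn(head_a(encoder(xa)), ya) + loss_fn(head_b(encoder(xb)), yb)
opt.zero_grad()
loss.backward()
opt.step()

# (ii) Multi-step training: train on task A first, then task B,
# reusing the shared encoder learned in the earlier step.
for x, y, head in [(xa, ya, head_a), (xb, yb, head_b)]:
    step_loss = loss_fn(head(encoder(x)), y)
    opt.zero_grad()
    step_loss.backward()
    opt.step()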
Citation
Zhang, Z., Yu, W., Yu, M., Guo, Z., & Jiang, M. (2023). A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 943–956). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.eacl-main.66