Multi-Task Learning in Natural Language Processing: An Overview

Abstract

Deep learning approaches have achieved great success in the field of Natural Language Processing (NLP). However, directly training deep neural models often suffers from the overfitting and data scarcity problems that are pervasive in NLP tasks. In recent years, Multi-Task Learning (MTL), which leverages useful information from related tasks to improve performance on all of them simultaneously, has been used to address these problems. In this article, we give an overview of the use of MTL in NLP tasks. We first review MTL architectures used in NLP and categorize them into four classes: parallel, hierarchical, modular, and generative adversarial architectures. We then present optimization techniques for loss construction, gradient regularization, data sampling, and task scheduling that are needed to properly train a multi-task model. After presenting applications of MTL to a variety of NLP tasks, we introduce some benchmark datasets. Finally, we conclude and discuss several possible research directions in this field.
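
To make the architecture and training techniques mentioned above concrete, here is a minimal PyTorch sketch of the parallel (hard parameter sharing) architecture combined with two of the training techniques the survey covers: static loss weighting and size-proportional task sampling. The model, task names, dataset sizes, and weights below are illustrative assumptions, not details taken from the article.

import random
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Parallel MTL via hard parameter sharing: one shared encoder, per-task heads."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, task_num_labels):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # One classification head per task; the encoder parameters are shared.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, n) for task, n in task_num_labels.items()
        })

    def forward(self, token_ids, task):
        _, (h_n, _) = self.encoder(self.embedding(token_ids))
        return self.heads[task](h_n[-1])  # logits for the requested task

# Two hypothetical classification tasks of different dataset sizes.
task_num_labels = {"sentiment": 2, "topic": 10}
dataset_sizes = {"sentiment": 1000, "topic": 4000}
task_weights = {"sentiment": 1.0, "topic": 0.5}  # static loss weights (illustrative)

model = SharedEncoderMTL(vocab_size=5000, embed_dim=64, hidden_dim=128,
                         task_num_labels=task_num_labels)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
tasks = list(task_num_labels)

for step in range(100):
    # Proportional data sampling: pick a task with probability ~ its dataset size.
    task = random.choices(tasks, weights=[dataset_sizes[t] for t in tasks])[0]
    token_ids = torch.randint(1, 5000, (32, 20))           # synthetic batch of token ids
    labels = torch.randint(0, task_num_labels[task], (32,))
    loss = task_weights[task] * loss_fn(model(token_ids, task), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this scheme every task shares the encoder's parameters, so gradients from all tasks regularize the shared representation; only the lightweight output heads are task-specific. The survey's hierarchical, modular, and adversarial architectures differ mainly in how much of the network is shared and how task interactions are structured.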

Citation (APA)

Chen, S., Zhang, Y., & Yang, Q. (2024). Multi-Task Learning in Natural Language Processing: An Overview. ACM Computing Surveys, 56(12). https://doi.org/10.1145/3663363
