Multi-task Active Learning for Pre-trained Transformer-based Models

Abstract

Multi-task learning, in which several tasks are jointly learned by a single model, allows NLP models to share information from multiple annotations and may facilitate better predictions when the tasks are inter-related. This technique, however, requires annotating the same text with multiple annotation schemes, which may be costly and laborious. Active learning (AL) has been demonstrated to optimize annotation processes by iteratively selecting unlabeled examples whose annotation is most valuable for the NLP model. Yet, multi-task active learning (MT-AL) has not been applied to state-of-the-art pre-trained Transformer-based NLP models. This paper aims to close this gap. We explore various multi-task selection criteria in three realistic multi-task scenarios, reflecting different relations between the participating tasks, and demonstrate the effectiveness of multi-task compared to single-task selection. Our results suggest that MT-AL can be effectively used in order to minimize annotation efforts for multi-task NLP models.
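To illustrate the kind of selection step the abstract describes, the sketch below shows one possible multi-task selection criterion: predictive-entropy uncertainty computed per task and averaged across tasks, with the most uncertain pool examples sent for annotation. This is only a minimal, assumed example; the paper itself explores several multi-task criteria, and the function names (`select_batch`, `entropy`) and the mean-entropy aggregation here are illustrative choices, not the authors' specific method.

```python
# Minimal sketch of one MT-AL selection step (assumed criterion: mean per-task entropy).
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per example; probs has shape (n_examples, n_classes)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_batch(task_probs: dict, k: int) -> np.ndarray:
    """Pick the k unlabeled examples with the highest mean per-task entropy.

    task_probs maps each task name to an array of shape
    (n_unlabeled, n_classes_for_that_task) of model predictions.
    """
    per_task_scores = np.stack([entropy(p) for p in task_probs.values()])
    joint_score = per_task_scores.mean(axis=0)   # average uncertainty across tasks
    return np.argsort(-joint_score)[:k]          # indices of the k most uncertain examples

# Toy usage: two hypothetical tasks with random softmax-normalized predictions.
def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
pool = {
    "ner": softmax(rng.normal(size=(100, 5))),
    "pos": softmax(rng.normal(size=(100, 17))),
}
print(select_batch(pool, k=10))  # examples to send for joint multi-task annotation
```

In an actual MT-AL loop, the selected examples would be annotated for all participating tasks, added to the training set, and the multi-task model retrained before the next selection round.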

Citation (APA)

Rotman, G., & Reichart, R. (2022). Multi-task Active Learning for Pre-trained Transformer-based Models. Transactions of the Association for Computational Linguistics, 10, 1209–1228. https://doi.org/10.1162/tacl_a_00515
