Social media provide a rich source of data that can be mined and used for a wide variety of research purposes. However, annotating these data can be expensive, yet it is necessary for state-of-the-art pre-trained language models to achieve high prediction performance. We therefore combine pool-based active learning based on prediction uncertainty (an established method for reducing annotation costs) with unsupervised task adaptation through Masked Language Modeling (MLM). Results on three datasets (two social media corpora and one benchmark dataset) show that task adaptation significantly improves performance and that, with only a fraction of the available training data, this approach reaches F1-scores similar to those of an upper-bound baseline model fine-tuned on all training data. We thereby contribute to the still scarce body of research on active learning with pre-trained language models and propose a cost-efficient annotation sampling and fine-tuning approach that can be applied to a wide variety of tasks and datasets.
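For illustration only, the sketch below shows the two ingredients named in the abstract in their most generic form: unsupervised task adaptation by continuing MLM pre-training on the unlabeled pool, and uncertainty-based selection of the next batch to annotate. It assumes the HuggingFace transformers and datasets libraries; the model name, the entropy criterion, the batch size k, and helper names such as adapt_with_mlm and select_batch are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of MLM task adaptation plus
# uncertainty-based pool sampling with a BERT-style model.
import torch
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def adapt_with_mlm(pool_texts, output_dir="bert-task-adapted"):
    """Unsupervised task adaptation: continue MLM pre-training on the unlabeled pool."""
    mlm_model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    ds = Dataset.from_dict({"text": pool_texts}).map(
        lambda batch: tokenizer(batch["text"], truncation=True),
        batched=True, remove_columns=["text"])
    Trainer(
        model=mlm_model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=3),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer),
    ).train()
    mlm_model.save_pretrained(output_dir)  # later reloaded as a sequence classifier

def select_batch(classifier, pool_texts, k=100):
    """Uncertainty sampling: indices of the k pool texts with the highest predictive entropy."""
    classifier.eval()
    scores = []
    with torch.no_grad():
        for text in pool_texts:
            inputs = tokenizer(text, truncation=True, return_tensors="pt")
            probs = torch.softmax(classifier(**inputs).logits, dim=-1).squeeze(0)
            scores.append(-(probs * probs.log()).sum().item())  # predictive entropy
    return torch.topk(torch.tensor(scores), k=min(k, len(pool_texts))).indices.tolist()

# One active-learning loop would adapt once, then repeatedly call select_batch,
# have annotators label the selected texts, and fine-tune the adapted model
# (loaded with AutoModelForSequenceClassification) on the growing labeled set.
```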
Citation
Lemmens, J., & Daelemans, W. (2023). Combining Active Learning and Task Adaptation with BERT for Cost-Effective Annotation of Social Media Datasets. In Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis (WASSA 2023) (pp. 237–250). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.wassa-1.22