Multitask Semi-Supervised Learning for Class-Imbalanced Discourse Classification

24 citations · 71 Mendeley readers

Abstract

As labeling schemas evolve over time, small differences can render datasets following older schemas unusable. This prevents researchers from building on top of previous annotation work and, in discourse learning in particular, results in many small, class-imbalanced datasets. In this work, we show that a multitask learning approach can combine discourse datasets from similar and diverse domains to improve discourse classification. We show a 4.9% improvement in Micro F1-score over current state-of-the-art benchmarks on the NewsDiscourse dataset, one of the largest discourse datasets recently published, due in part to label correlations across tasks, which improve performance for underrepresented classes. We also offer an extensive review of additional techniques proposed to address resource-poor problems in NLP, and show that none of these approaches improves classification accuracy in our setting.
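The multitask setup described in the abstract can be pictured as a shared encoder feeding one classification head per discourse dataset, so that datasets with different label schemas still contribute to a common representation. Below is a minimal, hypothetical PyTorch sketch of that idea; the encoder, task names, and label counts are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only (assumed architecture, not the paper's code):
# a shared encoder with one classification head per discourse dataset.
import torch
import torch.nn as nn

class MultitaskDiscourseClassifier(nn.Module):
    def __init__(self, encoder_dim: int, num_labels_per_task: dict[str, int]):
        super().__init__()
        # Shared encoder. In practice this would be a pretrained transformer;
        # a small projection stands in for it to keep the sketch self-contained.
        self.encoder = nn.Sequential(nn.Linear(encoder_dim, encoder_dim), nn.ReLU())
        # One softmax head per dataset/task, so label schemas can differ.
        self.heads = nn.ModuleDict({
            task: nn.Linear(encoder_dim, n_labels)
            for task, n_labels in num_labels_per_task.items()
        })

    def forward(self, sentence_embeddings: torch.Tensor, task: str) -> torch.Tensor:
        shared = self.encoder(sentence_embeddings)
        return self.heads[task](shared)  # logits for the requested task

# Usage: each batch comes from a single dataset; summing task losses lets the
# shared encoder pick up label correlations across schemas.
model = MultitaskDiscourseClassifier(
    encoder_dim=768,
    num_labels_per_task={"news_discourse": 9, "aux_dataset": 5},  # hypothetical sizes
)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(4, 768)        # dummy sentence embeddings
y = torch.randint(0, 9, (4,))  # dummy NewsDiscourse labels
loss = loss_fn(model(x, task="news_discourse"), y)
loss.backward()
```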

Citation (APA)

Spangher, A., May, J., Shiang, S. R., & Deng, L. (2021). Multitask Semi-Supervised Learning for Class-Imbalanced Discourse Classification. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 498–517). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.40
