Variational pretraining for semi-supervised text classification

Abstract

We introduce VAMPIRE, a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational autoencoder on in-domain, unlabeled data and use its internal states as features in a downstream classifier. Empirically, we show the relative strength of VAMPIRE against computationally expensive contextual embeddings and other popular semi-supervised baselines under low resource settings. We also find that fine-tuning to in-domain data is crucial to achieving decent performance from contextual embeddings when working with limited supervision. We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks.
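The sketch below illustrates the idea summarized in the abstract: pretrain a unigram (bag-of-words) document model as a variational autoencoder on unlabeled in-domain text, then reuse its internal states as features for a downstream classifier. It is an illustrative reconstruction, not the authors' released implementation; the class name, layer sizes, and use of PyTorch are assumptions for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BowVAE(nn.Module):
    """Illustrative variational autoencoder over unigram (bag-of-words) counts."""

    def __init__(self, vocab_size: int, hidden_dim: int = 64, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, vocab_size)

    def encode(self, bow):
        h = F.relu(self.encoder(bow))
        return self.mu(h), self.logvar(h)

    def forward(self, bow):
        mu, logvar = self.encode(bow)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        logits = self.decoder(z)
        # Reconstruction term: multinomial log-likelihood of the observed word counts.
        recon = -(bow * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
        # KL divergence between the approximate posterior and a standard normal prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return recon + kl

    @torch.no_grad()
    def features(self, bow):
        # After pretraining on unlabeled data, the internal states (here, the
        # posterior mean) serve as document features for a downstream classifier.
        mu, _ = self.encode(bow)
        return mu

In use, one would minimize the loss returned by forward() on unlabeled bag-of-words vectors, then feed features(bow), optionally concatenated with other embeddings, to a small supervised classifier trained on the limited labeled data.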

Cite

APA

Gururangan, S., Dang, T., Card, D., & Smith, N. A. (2019). Variational pretraining for semi-supervised text classification. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (pp. 5880–5894). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/p19-1590
