A neural architecture for multi-label text classification


Abstract

We propose a novel supervised approach for multi-label text classification, based on a neural network architecture consisting of a single encoder and multiple classifier heads. Our method predicts which subset of possible tags best matches an input text. It uses computational resources efficiently and exploits dependencies between tags by encoding the input text into a compact representation, which is then passed to the multiple classifier heads. We test our architecture on a Twitter hashtag prediction task, comparing it to a baseline model with multiple feedforward networks and a baseline model with multiple recurrent neural networks with GRU cells. We show that our approach achieves significantly better performance than baselines with an equivalent number of parameters.
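The core idea described in the abstract — one shared encoder whose compact output feeds several per-tag classifier heads — can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: the layer sizes, the ReLU/sigmoid choices, and the class name are all assumptions, and the paper's encoder for Twitter text would in practice be a sequence model rather than a single dense layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedEncoderMultiHead:
    """Hypothetical sketch: a single encoder produces one compact
    representation, and each tag gets its own small binary head.
    All dimensions are illustrative, not from the paper."""

    def __init__(self, input_dim, hidden_dim, num_tags):
        # shared encoder parameters (one dense layer for illustration)
        self.W_enc = rng.normal(0.0, 0.1, (input_dim, hidden_dim))
        self.b_enc = np.zeros(hidden_dim)
        # one linear head per tag, all reading the same encoding
        self.W_heads = rng.normal(0.0, 0.1, (num_tags, hidden_dim))
        self.b_heads = np.zeros(num_tags)

    def forward(self, x):
        # the encoding is computed once and reused by every head,
        # which is where the parameter/computation savings come from
        h = relu(x @ self.W_enc + self.b_enc)
        return sigmoid(self.W_heads @ h + self.b_heads)

model = SharedEncoderMultiHead(input_dim=50, hidden_dim=16, num_tags=5)
x = rng.normal(size=50)          # stand-in for an encoded input text
probs = model.forward(x)         # one independent probability per tag
predicted_tags = [i for i, p in enumerate(probs) if p > 0.5]
```

Because every head reads the same shared encoding, information useful to one tag (e.g. co-occurrence patterns between hashtags) is available to all of them, in contrast to the baselines of one fully separate network per tag.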

Citation (APA)

Coope, S., Bachrach, Y., Žukov-Gregorič, A., Rodriguez, J., Maksak, B., McMurtie, C., & Bordbar, M. (2018). A neural architecture for multi-label text classification. In Advances in Intelligent Systems and Computing (Vol. 868, pp. 676–691). Springer Verlag. https://doi.org/10.1007/978-3-030-01054-6_49
