All-in Text: Learning document, label, and word representations jointly

Citations: 50 · Mendeley readers: 48

Abstract

Conventional multi-label classification algorithms treat the target labels of the classification task as mere symbols, void of inherent semantics. However, in many cases textual descriptions of these labels are available or can easily be constructed from public document sources such as Wikipedia. In this paper, we investigate an approach for embedding documents and labels into a joint space while sharing word representations between documents and labels. To find such embeddings, we rely on the text of the documents as well as on descriptions of the labels. The use of such label descriptions not only promises increased performance on conventional multi-label text classification tasks, but also allows predictions for labels that have not been seen during the training phase. The potential of our method is demonstrated on the multi-label classification task of assigning keywords from the Medical Subject Headings (MeSH) to publications in biomedical research, both in a conventional and in a zero-shot learning setting.
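The core idea of the abstract can be illustrated with a minimal sketch: represent both a document and each label description in the same word-vector space, then rank labels by similarity. This is not the paper's actual training procedure (the authors learn the embeddings jointly); the hand-crafted one-hot word vectors, the toy vocabulary, and the label descriptions below are all illustrative assumptions.

```python
import numpy as np

# Toy shared word-embedding table. In the paper's setting these vectors
# would be learned jointly from documents and label descriptions; here we
# use one-hot vectors purely to illustrate the scoring mechanics.
vocab = ["heart", "attack", "lung", "cancer", "therapy", "disease"]
word_vecs = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

def embed(text):
    """Embed a text as the average of its (shared) word vectors."""
    vecs = [word_vecs[w] for w in text.split() if w in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(len(vocab))

def score(doc, label_desc):
    """Cosine similarity between document and label-description embeddings."""
    d, l = embed(doc), embed(label_desc)
    denom = np.linalg.norm(d) * np.linalg.norm(l)
    return float(d @ l / denom) if denom else 0.0

# Hypothetical label descriptions standing in for MeSH definitions. A label
# never seen during training can still be scored, because its description
# reuses the shared word space -- this is the zero-shot case.
labels = {
    "Myocardial Infarction": "heart attack disease",
    "Lung Neoplasms": "lung cancer disease",
}
doc = "heart attack therapy"
ranking = sorted(labels, key=lambda l: score(doc, labels[l]), reverse=True)
print(ranking[0])  # -> Myocardial Infarction
```

Because documents and labels share one word space, no per-label classifier is needed: any label with a textual description can be ranked, seen in training or not.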

Citation (APA)

Nam, J., Loza Mencía, E., & Fürnkranz, J. (2016). All-in Text: Learning document, label, and word representations jointly. In 30th AAAI Conference on Artificial Intelligence, AAAI 2016 (pp. 1948–1954). AAAI Press. https://doi.org/10.1609/aaai.v30i1.10241
