Solving Feature Sparseness in Text Classification using Core-Periphery Decomposition

Citations: 3 · Mendeley readers: 64

Abstract

Feature sparseness is a problem common to cross-domain and short-text classification tasks. To overcome this problem, we propose a novel method based on graph decomposition to find candidate features for expanding feature vectors. Specifically, we first create a feature-relatedness graph, which is subsequently decomposed into core-periphery (CP) pairs, and we use the peripheries as the expansion candidates of the cores. We expand both training and test instances using the computed related features and use them to train a text classifier. We observe that prioritising features that are common to both training and test instances as cores during the CP decomposition further improves the accuracy of text classification. We evaluate the proposed CP-decomposition-based feature expansion method on benchmark datasets for cross-domain sentiment classification and short-text classification. Our experimental results show that the proposed method consistently outperforms all baselines on short-text classification tasks, and performs competitively with pivot-based cross-domain sentiment classification methods.
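The expansion step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the CP pairs (a mapping from each core feature to its periphery features) are assumed to be precomputed from the feature-relatedness graph, and the example features and the down-weighting scheme are invented for demonstration.

```python
from collections import Counter

# Illustrative CP pairs: core feature -> periphery features (hypothetical values,
# in practice obtained by CP decomposition of a feature-relatedness graph).
cp_pairs = {
    "excellent": {"great", "superb"},
    "battery": {"charge", "power"},
}

def expand_instance(features, cp_pairs, weight=0.5):
    """Expand a bag-of-words instance with the periphery features of its cores.

    Observed features keep their counts; expansion features are added with a
    smaller weight so they do not dominate the original representation.
    """
    expanded = Counter(features)
    for feat in features:
        for periphery_feat in cp_pairs.get(feat, ()):
            expanded[periphery_feat] += weight
    return dict(expanded)

doc = ["excellent", "battery", "life"]
print(expand_instance(doc, cp_pairs))
```

Both training and test instances would be expanded this way before being passed to the classifier, which is what lets a sparse test instance share features with the training data.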

Citation (APA)

Cui, X., Kojaku, S., Masuda, N., & Bollegala, D. (2018). Solving Feature Sparseness in Text Classification using Core-Periphery Decomposition. In NAACL HLT 2018 - Lexical and Computational Semantics, SEM 2018, Proceedings of the 7th Conference (pp. 255–264). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s18-2030
