Composite Feature Extraction and Selection for Text Classification

32 Citations · 49 Readers

This article is free to access.

Abstract

Although words are the basic semantic units in text, phrases and expressions carry additional information that is important for text classification. To capture this information, traditional algorithms extract composite features via word sequences or co-occurrences, such as bigrams and termsets, but they ignore the influence of stop words and punctuation, which results in huge numbers of weak features. In this paper, we propose a text structure-based algorithm to extract composite features. Termsets that cross punctuation marks or stop words in the text are excluded. To eliminate redundancy, a novel discriminative measure with two factors is proposed: one measures relevancy, while the other boosts the values of composite features whose class frequencies are much smaller than those of their sub-features. Experiments on three benchmark datasets with both a support vector machine and a naive Bayes classifier illustrate the effectiveness of the approach.
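As a rough illustration (not the authors' implementation), the structure-based extraction step might look like the following sketch: termsets (unordered word pairs) are generated only within text segments bounded by punctuation or stop words, so no composite feature crosses such a boundary. The stop-word list here is a small placeholder; the paper's actual list is not given.

```python
import re
from itertools import combinations

# Placeholder stop-word list for illustration only.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}

def extract_termsets(text):
    """Generate composite features (unordered word pairs) only within
    runs of tokens not interrupted by punctuation or stop words."""
    # Punctuation marks delimit segments.
    segments = re.split(r"[^\w\s]+", text.lower())
    termsets = set()
    for seg in segments:
        run = []
        for token in seg.split():
            if token in STOP_WORDS:
                # A stop word closes the current run; emit its pairs.
                termsets.update(frozenset(p) for p in combinations(run, 2))
                run = []
            else:
                run.append(token)
        termsets.update(frozenset(p) for p in combinations(run, 2))
    return termsets
```

For example, in "machine learning improves text classification, and deep models help", the pair {classification, deep} is never produced, because the comma and the stop word "and" separate the two runs.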

Citation (APA)

Wan, C., Wang, Y., Liu, Y., Ji, J., & Feng, G. (2019). Composite Feature Extraction and Selection for Text Classification. IEEE Access, 7, 35208–35219. https://doi.org/10.1109/ACCESS.2019.2904602
