Large scale text classification with efficient word embedding

Abstract

This article offers an empirical exploration of the efficient use of word-level convolutional neural networks (word-CNNs) for large-scale text classification. In general, word-CNNs are difficult to train on large-scale datasets because the size of the word embedding grows dramatically with the size of the vocabulary. To address this issue, this paper presents a de-noising approach to word embedding. We compare our model with several recently proposed CNN models on publicly available datasets. The experimental results show that the proposed method improves the usefulness of word-CNNs and increases the accuracy of text classification.
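For context, the sketch below shows a generic word-level CNN text classifier of the kind the abstract refers to, written in PyTorch. It is an illustrative assumption, not the authors' model: the de-noising step applied to the embedding is not described in the abstract and is therefore not reproduced here, and all hyperparameters (embedding dimension, filter counts, kernel sizes, vocabulary size) are placeholder values. The embedding table, whose parameter count scales with vocabulary size, is the component the abstract identifies as the training bottleneck.

```python
import torch
import torch.nn as nn

class WordCNN(nn.Module):
    """Minimal word-level CNN text classifier (Kim-2014 style sketch).

    The embedding table holds vocab_size x embed_dim parameters, which is
    the part that grows quickly as the vocabulary grows.
    """
    def __init__(self, vocab_size, embed_dim=128, num_classes=4,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                   # (batch, embed_dim, seq_len)
        # Convolve over word positions, then max-pool over time per filter.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # (batch, num_classes)

# Example: two padded sequences of 50 token ids over a 50k-word vocabulary.
model = WordCNN(vocab_size=50_000)
logits = model(torch.randint(1, 50_000, (2, 50)))
```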

Citation (APA)

Ma, X., Jin, R., Paik, J. Y., & Chung, T. S. (2018). Large scale text classification with efficient word embedding. In Lecture Notes in Electrical Engineering (Vol. 425, pp. 465–469). Springer Verlag. https://doi.org/10.1007/978-981-10-5281-1_51
