Hierarchical Self-Attention Hybrid Sparse Networks for Document Classification

Abstract

Document classification is a fundamental problem in natural language processing, and deep learning has demonstrated great success on this task. However, most existing models do not incorporate sentence structure as a semantic feature in the architecture and pay little attention to the contextual importance of words and sentences. In this paper, we present a new model for document classification based on a sparse recurrent neural network and a self-attention mechanism. We then analyze three variants of GRU and LSTM to evaluate the sparse model on different datasets. Extensive experiments demonstrate that our model obtains competitive performance and outperforms previous models.
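The abstract describes a hierarchical design: recurrent hidden states are pooled by attention at the word level to form sentence vectors, which are pooled again at the sentence level to form a document vector for classification. The sketch below illustrates that two-level attention pooling in NumPy only; the random matrices stand in for the sparse GRU/LSTM hidden states, and the context vectors (`w_word`, `w_sent`) and output weights are hypothetical placeholders, not the authors' trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Attention-weighted sum of hidden states.
    H: (n, d) matrix of hidden states; w: (d,) learned context vector."""
    scores = softmax(H @ w)   # (n,) importance weight per position
    return scores @ H         # (d,) pooled representation

rng = np.random.default_rng(0)
d = 8
# Toy "document": 3 sentences with 5, 7, and 4 words; each row is a
# stand-in for a sparse-RNN hidden state (assumption for illustration).
doc = [rng.normal(size=(n, d)) for n in (5, 7, 4)]

w_word = rng.normal(size=d)   # word-level context vector (hypothetical)
w_sent = rng.normal(size=d)   # sentence-level context vector (hypothetical)

# Word-level attention: pool each sentence's word states into one vector.
sent_vecs = np.stack([attention_pool(H, w_word) for H in doc])  # (3, d)
# Sentence-level attention: pool sentence vectors into a document vector.
doc_vec = attention_pool(sent_vecs, w_sent)                     # (d,)

# Final linear classifier over the document vector.
num_classes = 4
W_out = rng.normal(size=(d, num_classes))
probs = softmax(doc_vec @ W_out)
```

In the full model, the random hidden states would come from the sparse GRU/LSTM encoder, and all weights would be learned end-to-end.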

Citation (APA)
Huang, W., Tao, Z., Huang, X., Xiong, L., & Yu, J. (2021). Hierarchical Self-Attention Hybrid Sparse Networks for Document Classification. Mathematical Problems in Engineering, 2021. https://doi.org/10.1155/2021/5594895
