Self-inhibition residual convolutional networks for Chinese sentence classification


Abstract

Convolutional networks have become a dominant approach in many Natural Language Processing (NLP) tasks. However, these networks are typically shallow and simple, so they cannot capture the hierarchical features of text. In addition, the Chinese text preprocessing in those models is quite coarse, which loses rich semantic information. In this paper, we explore deep convolutional networks for Chinese sentence classification and present a new model named the Self-Inhibition Residual Convolutional Network (SIRCNN). The model employs additional Chinese character information and replaces the standard convolutional block with a self-inhibiting residual convolutional block to improve the performance of the deep network. It is one of the few explorations of deep convolutional networks across various text classification tasks. Experiments show that our model achieves state-of-the-art accuracy on three different datasets with a better convergence rate.
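The abstract does not specify the exact formulation of the self-inhibiting residual convolutional block, so the following is only an illustrative sketch of one plausible design: a convolutional transform whose output is damped by a learned sigmoid "inhibition" gate before being added back to the input through a residual connection. All function names, the gating form, and the parameter shapes here are assumptions, not the authors' implementation.

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1D convolution over a (seq_len, channels) input.
    w has shape (kernel, in_channels, out_channels)."""
    k, in_ch, out_ch = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], out_ch))
    for t in range(x.shape[0]):
        # window xp[t:t+k] has shape (k, in_ch); contract over kernel and input channels
        out[t] = np.einsum("ki,kio->o", xp[t:t + k], w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def self_inhibition_residual_block(x, w_conv, w_gate):
    """Hypothetical block: a conv+ReLU transform scaled down by a
    per-position sigmoid inhibition gate, plus a residual connection.
    This is a sketch, not the paper's actual block."""
    h = np.maximum(conv1d(x, w_conv), 0.0)    # convolutional features
    inhibition = sigmoid(conv1d(x, w_gate))   # gate values in (0, 1)
    return x + h * (1.0 - inhibition)         # residual add with inhibition

rng = np.random.default_rng(0)
seq_len, channels, kernel = 10, 8, 3
x = rng.standard_normal((seq_len, channels))
w_conv = rng.standard_normal((kernel, channels, channels)) * 0.1
w_gate = rng.standard_normal((kernel, channels, channels)) * 0.1
y = self_inhibition_residual_block(x, w_conv, w_gate)
print(y.shape)  # (10, 8)
```

Because the block preserves the input's shape, several such blocks can be stacked to build the deep network the abstract describes, with the residual path easing gradient flow during training.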

Citation (APA)

Xiong, M., Li, R., Li, Y., & Yang, Q. (2018). Self-inhibition residual convolutional networks for Chinese sentence classification. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 425–436). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_39
