Multi-applicable text classification based on deep neural network


Abstract

Most long text classification methods based on deep learning suffer from problems such as semantic sparsity and long-distance dependency. To tackle these problems, a novel multi-applicable text classification method based on a deep neural network (MTDNN) is proposed, which contains a bidirectional encoder representation from transformers (BERT), a dimension reduction layer, and a bidirectional long short-term memory (Bi-LSTM) network combined with an attention mechanism. BERT pre-trains the words into word embedding vectors. The dimension reduction layer extracts the feature phrase representations with higher weight from the word embedding vectors. The Bi-LSTM captures both forward and backward context representations. An attention mechanism is employed to focus on the information output by the Bi-LSTM. The experimental results show that the accuracy of the MTDNN for long text classification, short text classification, and sentiment analysis reaches 94.95%, 93.53% and 92.32%, respectively, outperforming other state-of-the-art text classification methods.
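The attention step described above (weighting Bi-LSTM outputs and pooling them into one representation) can be sketched as follows. This is a minimal, illustrative NumPy sketch of dot-product attention pooling, not the authors' implementation; the shapes, the scoring vector `w`, and the function names are assumptions for demonstration only.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, w):
    """Pool Bi-LSTM hidden states with attention.

    H: (T, 2d) array of Bi-LSTM outputs (forward + backward concatenated).
    w: (2d,) hypothetical learned attention query vector.
    Returns the pooled context vector (2d,) and attention weights (T,).
    """
    scores = H @ w            # one alignment score per time step
    alpha = softmax(scores)   # normalized attention weights, sum to 1
    context = alpha @ H       # weighted sum of hidden states
    return context, alpha

# Toy example: 5 time steps, 2d = 8 hidden units.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
context, alpha = attention_pool(H, w)
print(context.shape, alpha.shape)
```

In a full pipeline, `context` would be fed to a softmax classifier to produce the class label; in practice the scoring function is often an MLP rather than a single vector.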

Citation (APA)

Yang, J., Deng, F., Lv, S., Wang, R., Guo, Q., Kou, Z., & Chen, S. (2022). Multi-applicable text classification based on deep neural network. International Journal of Sensor Networks, 40(4), 277–286. https://doi.org/10.1504/IJSNET.2022.10049687
