Research on Long Text Classification Model Based on Multi‐Feature Weighted Fusion

Citations: 3 · Mendeley readers: 16

Abstract

Classifying long texts has become challenging as text data across the Internet grows rapidly in volume and complexity, making feature extraction from long documents increasingly difficult. To address contextual semantic relations, long-distance global dependencies, and polysemous words in long-text classification tasks, a long text classification model based on multi-feature weighted fusion is proposed. A BERT model produces feature representations that capture the global semantics and context of the text; convolutional neural networks extract local features at different levels, which an attention mechanism weights; the global contextual features are then fused with the weighted local features, and the classification result is obtained through equal-length convolution and pooling. Experimental results show that, on the same datasets, the proposed model outperforms traditional deep learning classification models in accuracy, precision, recall, and F1 score, demonstrating a clear advantage in long text classification.
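As a rough illustration of the pipeline described in the abstract, the sketch below wires the same stages together in PyTorch. It is an assumption-laden reading of the abstract rather than the authors' implementation: the class name MultiFeatureFusionClassifier, the kernel sizes, filter count, the additive attention, and the max-pooling readout are all hypothetical choices, and the paper's exact fusion and equal-length convolution details may differ.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MultiFeatureFusionClassifier(nn.Module):
    """Hypothetical sketch: BERT global features fused with attention-weighted multi-scale CNN local features."""

    def __init__(self, num_classes, bert_name="bert-base-chinese",
                 kernel_sizes=(2, 3, 4), num_filters=128):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size                 # 768 for bert-base
        # Convolutions with different kernel sizes capture local features at different levels.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k, padding=k // 2) for k in kernel_sizes
        )
        # Simple additive attention that weights local features over sequence positions.
        self.attn = nn.Linear(num_filters, 1)
        # Equal-length convolution (padding preserves sequence length) over the fused features.
        fused_dim = hidden + num_filters * len(kernel_sizes)
        self.eq_conv = nn.Conv1d(fused_dim, num_filters, kernel_size=3, padding=1)
        self.classifier = nn.Linear(num_filters, num_classes)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state  # (B, L, H) global contextual features
        x = tokens.transpose(1, 2)                             # (B, H, L) for Conv1d
        weighted_locals = []
        for conv in self.convs:
            feat = torch.relu(conv(x))[..., : tokens.size(1)]  # (B, F, L); trim even-kernel overhang
            feat = feat.transpose(1, 2)                        # (B, L, F)
            weights = torch.softmax(self.attn(feat), dim=1)    # attention weights over positions
            weighted_locals.append(feat * weights)             # weighted local features
        fused = torch.cat([tokens] + weighted_locals, dim=-1)  # fuse global + weighted local features
        fused = torch.relu(self.eq_conv(fused.transpose(1, 2)))  # equal-length convolution, (B, F, L)
        pooled = fused.max(dim=-1).values                      # pool over the sequence dimension
        return self.classifier(pooled)                         # class logits
```

In use, token IDs and the attention mask from a matching BERT tokenizer would be passed to forward; training with a standard cross-entropy loss is assumed, since the abstract does not specify the objective.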

Citation (APA)

Yue, X., Zhou, T., He, L., & Li, Y. (2022). Research on Long Text Classification Model Based on Multi‐Feature Weighted Fusion. Applied Sciences (Switzerland), 12(13). https://doi.org/10.3390/app12136556
