MIHNet: Combining N-gram, Sequential and Global Information for Text Classification

Abstract

Pre-trained language models have achieved impressive results on many NLP tasks. In particular, BERT (Bidirectional Encoder Representations from Transformers) opened a new era in NLP. Despite this success, these models capture global information well but are weak at modeling n-gram and sequential information. In this paper, we conduct exhaustive experiments with classical text classification models built on top of BERT for the text classification task and provide a general guide for BERT+ models. Finally, we propose a new text classification model called MIHNet (Multi-dimension Information Integration using Highway network), which integrates global, n-gram, and sequential information and achieves better performance. Notably, our model obtains new state-of-the-art results on eight widely studied text classification datasets.
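The abstract describes MIHNet as fusing BERT's global representation with n-gram and sequential features through a highway network, but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of that kind of three-branch fusion; the class names, branch widths, and kernel sizes are assumptions for illustration, not the authors' actual architecture.

import torch
import torch.nn as nn
from transformers import BertModel

class Highway(nn.Module):
    # Highway layer: y = t * H(x) + (1 - t) * x, with a learned gate t.
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))           # transform gate in (0, 1)
        h = torch.relu(self.transform(x))         # candidate transformed features
        return t * h + (1 - t) * x                # gated mix of new and raw features

class MIHNetSketch(nn.Module):
    # Hypothetical sketch: global (BERT pooled), n-gram (CNN), and
    # sequential (BiLSTM) branches, fused by a highway layer.
    def __init__(self, num_classes, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # n-gram branch: 1-D convolutions over BERT token embeddings
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, 128, kernel_size=k) for k in (2, 3, 4)
        )
        # sequential branch: BiLSTM over the same token embeddings
        self.lstm = nn.LSTM(hidden, 128, batch_first=True, bidirectional=True)
        fused = hidden + 3 * 128 + 2 * 128        # global + n-gram + sequential
        self.highway = Highway(fused)
        self.classifier = nn.Linear(fused, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state            # (B, T, H) token-level features
        global_feat = out.pooler_output           # (B, H) global [CLS] representation
        # n-gram features: ReLU + max-pool over time, one vector per kernel size
        x = tokens.transpose(1, 2)                # (B, H, T) for Conv1d
        ngram = torch.cat(
            [torch.relu(c(x)).max(dim=2).values for c in self.convs], dim=1
        )
        # sequential features: final forward and backward LSTM hidden states
        _, (h_n, _) = self.lstm(tokens)
        seq = torch.cat([h_n[0], h_n[1]], dim=1)  # (B, 256)
        fused = torch.cat([global_feat, ngram, seq], dim=1)
        return self.classifier(self.highway(fused))

The highway layer is a natural choice for this fusion step: its learned gate decides, per dimension, how much of the concatenated raw features to pass through unchanged versus replace with a transformed version, rather than forcing all three information sources through a single dense projection.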

Citation (APA)

Song, Y. (2020). MIHNet: Combining N-gram, Sequential and Global Information for Text Classification. Journal of Physics: Conference Series, 1453, 012156. https://doi.org/10.1088/1742-6596/1453/1/012156
