ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing


Abstract

English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnamese is spoken by approximately 100M people and several pre-trained models, e.g., PhoBERT, ViBERT, and vELECTRA, perform well on general Vietnamese NLP tasks such as POS tagging and named entity recognition, these pre-trained language models remain limited on Vietnamese social media tasks. In this paper, we present ViSoBERT, the first monolingual pre-trained language model for Vietnamese social media texts, pre-trained on a large-scale corpus of high-quality and diverse Vietnamese social media texts using the XLM-R architecture. Moreover, we evaluate our pre-trained model on five important downstream NLP tasks on Vietnamese social media texts: emotion recognition, hate speech detection, sentiment analysis, spam reviews detection, and hate speech spans detection. Our experiments demonstrate that ViSoBERT, with far fewer parameters, surpasses previous state-of-the-art models on multiple Vietnamese social media tasks. Our ViSoBERT model is available only for research purposes.
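
As a minimal sketch of how such a pre-trained encoder is typically used, the snippet below loads ViSoBERT with the Hugging Face Transformers library and extracts contextual embeddings for a Vietnamese social media post. The hub identifier "uitnlp/visobert" is an assumption for illustration; consult the authors' release for the actual checkpoint name.

    # Minimal usage sketch (assumed hub id "uitnlp/visobert"; verify against the release).
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("uitnlp/visobert")
    model = AutoModel.from_pretrained("uitnlp/visobert")

    # A Vietnamese social media post ("The weather is so nice today =))"),
    # with the emoticons typical of the pre-training domain.
    text = "Hôm nay trời đẹp quá =))"
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # last_hidden_state has shape (batch, seq_len, hidden); a task-specific head
    # (e.g., a sentiment classifier) would be fine-tuned on top of these states.
    print(outputs.last_hidden_state.shape)

A classification head fine-tuned on these representations is how the downstream tasks listed above (e.g., sentiment analysis) would consume the model.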

Citation (APA)

Nguyen, Q. N., Phan, T. C., Nguyen, D. V., & Van Nguyen, K. (2023). ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 5191–5207). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.emnlp-main.315
