A Comparative Study of Pretrained Language Models on Thai Social Text Categorization

Abstract

The ever-growing volume of user-generated content on social media provides a nearly unlimited corpus of unlabeled data, even in languages where resources are scarce. In this paper, we demonstrate that state-of-the-art results on two Thai social text categorization tasks can be achieved by pretraining a language model on a large, noisy Thai social media corpus of over 1.26 billion tokens and then fine-tuning it on the downstream classification tasks. Because the content is linguistically noisy and domain-specific, we apply data preprocessing steps designed specifically for Thai social media to make the text easier for the models to learn. We compare four modern language models: ULMFiT, ELMo with biLSTM, OpenAI GPT, and BERT, and evaluate them systematically across several dimensions, including pretraining and fine-tuning speed, perplexity, downstream classification benchmarks, and performance with limited pretraining data.
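
As a rough illustration of the pretrain-then-fine-tune workflow described in the abstract, the sketch below fine-tunes a BERT-style checkpoint for sequence classification with the Hugging Face transformers library. This is only a minimal sketch under assumptions: the checkpoint path, number of labels, and example text are hypothetical placeholders, not the authors' actual models, preprocessing, or data.

# Minimal sketch: fine-tune a pretrained language model on a downstream
# text classification task. Assumes a BERT-style Thai checkpoint exists
# locally at the (hypothetical) path below.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "path/to/thai-social-pretrained-bert"  # hypothetical pretrained LM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=4)

texts = ["ตัวอย่างข้อความจากโซเชียลมีเดีย"]  # example Thai social media post
labels = torch.tensor([0])                 # downstream category label

# Tokenize and run one fine-tuning step on the classification objective.
batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs = model(**batch, labels=labels)    # cross-entropy loss over the labels
outputs.loss.backward()
optimizer.step()

In the paper itself, this general recipe is compared across ULMFiT, ELMo with biLSTM, OpenAI GPT, and BERT, each with its own pretraining and fine-tuning procedure.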

Citation (APA)

Horsuwan, T., Kanwatchara, K., Vateekul, P., & Kijsirikul, B. (2020). A Comparative Study of Pretrained Language Models on Thai Social Text Categorization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12033 LNAI, pp. 63–75). Springer. https://doi.org/10.1007/978-3-030-41964-6_6
