Comparing Prompt-Based and Standard Fine-Tuning for Urdu Text Classification


Abstract

Recent advancements in natural language processing have demonstrated the efficacy of pre-trained language models for various downstream tasks through prompt-based fine-tuning. In contrast to standard fine-tuning, which relies solely on labeled examples, prompt-based fine-tuning combines a few labeled examples (few-shot) with guidance through prompts tailored to the specific language and task. For low-resource languages, where labeled examples are limited, prompt-based fine-tuning appears to be a promising alternative. In this paper, we compare prompt-based and standard fine-tuning for the popular task of text classification in Urdu and Roman Urdu. We conduct experiments using five datasets covering different domains and several pre-trained multilingual transformers. The results reveal that prompt-based fine-tuning achieves a significant improvement of up to 13% in accuracy over standard fine-tuning. This suggests the potential of prompt-based fine-tuning as a valuable approach for low-resource languages with limited labeled data.
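The contrast the abstract draws can be sketched in general terms: standard fine-tuning trains a classification head over label IDs, whereas prompt-based fine-tuning reframes classification as cloze-style mask filling, mapping the language model's predictions at a `[MASK]` position to classes through a verbalizer. The template, label words, and toy score dictionary below are hypothetical illustrations (shown in English for readability), not the paper's actual Urdu prompts or models.

```python
# Illustrative sketch of prompt-based classification (not the authors' code).
# A cloze template wraps the input; a verbalizer maps each class to candidate
# label words; the class whose label words the masked LM scores highest wins.

def build_prompt(text: str, template: str = "{text} It was [MASK].") -> str:
    """Wrap the input in a cloze template; the LM fills the [MASK] slot."""
    return template.format(text=text)

# Hypothetical verbalizer for a sentiment task: class -> candidate label words.
VERBALIZER = {
    "positive": ["great", "good"],
    "negative": ["bad", "terrible"],
}

def classify_with_prompt(mask_word_scores: dict) -> str:
    """Given the LM's scores for words at [MASK], return the class whose
    verbalizer words have the highest total score."""
    return max(
        VERBALIZER,
        key=lambda label: sum(mask_word_scores.get(w, 0.0)
                              for w in VERBALIZER[label]),
    )

# Toy scores standing in for a masked LM's distribution at the [MASK] token.
scores = {"great": 0.4, "good": 0.1, "bad": 0.05, "terrible": 0.02}
print(build_prompt("The film was wonderful."))
print(classify_with_prompt(scores))  # prints "positive"
```

In a real few-shot setup, the masked language model itself would be fine-tuned on the handful of templated examples, so the task stays close to its pre-training objective; standard fine-tuning instead discards the LM head and learns a new classifier from scratch, which is harder when labeled data is scarce.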

Cite

Ullah, F., Azam, U., Faheem, A., Kamiran, F., & Karim, A. (2023). Comparing Prompt-Based and Standard Fine-Tuning for Urdu Text Classification. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6747–6754). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.449
