Towards Agile Text Classifiers for Everyone

Abstract

Text-based safety classifiers are widely used for content moderation and, increasingly, to tune generative language model behavior, a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve through iteration and adaptation. This paper introduces and evaluates methods for agile text classification, whereby classifiers are trained using small, targeted datasets that can be quickly developed for a particular policy. Experimenting with seven datasets from three safety-related domains, comprising 15 annotation schemes, led to our key finding: prompt-tuning large language models, like PaLM 62B, with a labeled dataset of as few as 80 examples can achieve state-of-the-art performance. We argue that this enables a paradigm shift for text classification, especially for models supporting safer online discourse. Instead of collecting millions of examples in an attempt to create universal safety classifiers over months or years, classifiers could be tuned using small datasets created by individuals or small organizations, tailored for specific use cases, and iterated on and adapted within the span of a day.
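The recipe the abstract describes is compact enough to sketch. Below is a minimal, hypothetical illustration of prompt-tuning a classifier on a tiny policy dataset using the Hugging Face PEFT library. Since PaLM 62B is not publicly available, an open encoder model stands in; the model name, hyperparameters, and two-example dataset are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of the "agile" recipe: prompt-tune a frozen LM on a tiny labeled
# policy dataset. All names and settings below are illustrative assumptions;
# the paper used PaLM 62B, which is not publicly available.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

MODEL = "roberta-base"  # stand-in for PaLM 62B (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL)
base = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Freeze the backbone; train only the soft-prompt embeddings
# (plus the small classification head that PEFT keeps trainable).
config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the full model

# ~80 labeled examples is the regime the paper targets; two toy ones shown.
examples = [
    ("You people are all worthless.", 1),   # violates policy
    ("I disagree with this decision.", 0),  # allowed
]
texts, labels = zip(*examples)
batch = tokenizer(list(texts), padding=True, return_tensors="pt")
batch["labels"] = torch.tensor(labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-3)
model.train()
for _ in range(10):  # a few passes suffice at this data scale
    out = model(**batch)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because only the soft-prompt embeddings and a small head are updated, retuning after a policy revision takes minutes on a single accelerator, which is what makes the day-scale iteration loop argued for above plausible.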

Citation (APA)

Mozes, M., Hoffmann, J., Tomanek, K., Kouate, M., Thain, N., Yuan, A., … Dixon, L. (2023). Towards Agile Text Classifiers for Everyone. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 400–414). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.30
