A Million Tweets Are Worth a Few Points: Tuning Transformers for Customer Service Tasks


Abstract

In online domain-specific customer service applications, many companies struggle to deploy advanced NLP models successfully, due to the limited availability of, and noise in, their datasets. While prior research has demonstrated the potential of transferring large open-domain pretrained models to domain-specific tasks, the appropriate (pre)training strategies have not yet been rigorously evaluated in such social media customer service settings, especially under multilingual conditions. We address this gap by (i) collecting a multilingual social media corpus of customer service conversations (865k tweets), (ii) comparing various pipelines of pretraining and finetuning approaches, and (iii) applying them to five different end tasks. We show that pretraining a generic multilingual transformer model on our in-domain dataset, before finetuning on specific end tasks, consistently boosts performance, especially in non-English settings.
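The strategy the abstract describes — continuing pretraining of a generic multilingual transformer on in-domain tweets before finetuning — could be sketched with the Hugging Face `transformers` library as below. This is an illustrative sketch only: the paper does not specify a library, and the model name (`bert-base-multilingual-cased`), the `tweets.txt` corpus path, and all hyperparameters are assumptions, not values from the paper.

```python
# Hedged sketch of domain-adaptive pretraining (masked language modeling on
# in-domain tweets) followed by saving a checkpoint for later task finetuning.
# All names and hyperparameters here are illustrative assumptions.

def load_tweets(path):
    """Read one tweet per line, dropping blank lines and surrounding whitespace."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

if __name__ == "__main__":
    from transformers import (
        AutoModelForMaskedLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    # A generic multilingual transformer, as in the abstract (mBERT assumed here).
    checkpoint = "bert-base-multilingual-cased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForMaskedLM.from_pretrained(checkpoint)

    # In-domain pretraining corpus: customer-service tweets (placeholder path).
    tweets = load_tweets("tweets.txt")
    encodings = tokenizer(tweets, truncation=True, max_length=128)
    dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

    # Standard BERT-style masking; the collator pads and masks 15% of tokens.
    collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="mbert-tweets", num_train_epochs=1),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()  # domain-adaptive pretraining step
    # The saved checkpoint would then be finetuned on each specific end task.
    trainer.save_model("mbert-tweets")
```

The key design point is the two-stage pipeline: the MLM objective adapts the model to the noisy, multilingual tweet distribution first, and task-specific finetuning starts from that adapted checkpoint rather than from the open-domain one.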

Citation (APA)

Hadifar, A., Labat, S., Hoste, V., Develder, C., & Demeester, T. (2021). A Million Tweets Are Worth a Few Points: Tuning Transformers for Customer Service Tasks. In NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 220–225). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.naacl-main.21
