Bernice: A Multilingual Pre-trained Encoder for Twitter

Abstract

The language of Twitter differs significantly from that of other domains commonly included in large language model training. While tweets are typically multilingual and contain informal language, including emoji and hashtags, most pre-trained language models for Twitter are either monolingual, adapted from other domains rather than trained exclusively on Twitter, or trained on only a limited amount of in-domain Twitter data. We introduce Bernice, the first multilingual RoBERTa language model trained from scratch on 2.5 billion tweets with a custom tweet-focused tokenizer. We evaluate on a variety of monolingual and multilingual Twitter benchmarks, finding that our model consistently matches or exceeds the performance of models adapted to social media data as well as strong multilingual baselines, despite being trained on less data overall. We posit that it is more efficient compute- and data-wise to train completely on in-domain data with a specialized domain-specific tokenizer.
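As a minimal sketch of how an encoder like this could be used, assuming the checkpoint is published on the Hugging Face Hub under the id "jhu-clsp/bernice" and loads through the standard transformers Auto classes (the model id and loading path are assumptions, not details given in the abstract):

```python
# Sketch: encoding a tweet with Bernice via Hugging Face transformers.
# The model id "jhu-clsp/bernice" is an assumption; verify it against
# the authors' official release before relying on it.
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "jhu-clsp/bernice"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# An informal, non-English tweet with a hashtag and an emoji -- the kind
# of input the custom tweet-focused tokenizer is built for.
tweet = "No puedo creer que ya es viernes!! #TGIF 🎉"

inputs = tokenizer(tweet, return_tensors="pt")
outputs = model(**inputs)

# Mean-pool the final hidden states into one fixed-size tweet embedding.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # torch.Size([1, hidden_size])
```

For downstream tasks such as the Twitter benchmarks mentioned above, the same checkpoint would typically be fine-tuned (for example with AutoModelForSequenceClassification) rather than used through raw pooled hidden states.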

Citation (APA)

DeLucia, A., Wu, S., Mueller, A., Aguirre, C., Dredze, M., & Resnik, P. (2022). Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 6191–6205). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.415
