MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling

arXiv: 2212.07284
5 citations of this article · 29 Mendeley readers

Abstract

Static subword tokenization algorithms have been an essential component of recent works on language modeling. However, their static nature results in important flaws that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pretrained model on several English datasets from different domains as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
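
To make the idea of a differentiable tokenizer more concrete, the sketch below shows one way a byte-to-block pooling module could be wired up in PyTorch: a small frontier predictor assigns each byte a probability of starting a new block, the cumulative sum of those probabilities gives each byte an expected block position, and a Gaussian kernel softly pools byte embeddings into block embeddings. This is only a minimal illustration of the general mechanism described in the abstract, not the authors' implementation; the class name `SoftBlockTokenizer`, the shape of the frontier predictor, and the kernel width `sigma` are assumptions made for this example.

```python
# Minimal sketch of a differentiable byte-to-block pooling module in the spirit
# of MANTa. Hypothetical names and hyperparameters; not the paper's actual code.
import torch
import torch.nn as nn


class SoftBlockTokenizer(nn.Module):
    def __init__(self, vocab_size=256, d_model=64, max_blocks=32, sigma=1.0):
        super().__init__()
        self.byte_emb = nn.Embedding(vocab_size, d_model)
        # Tiny frontier predictor: probability that each byte starts a new block.
        self.frontier = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, 1)
        )
        self.max_blocks = max_blocks
        self.sigma = sigma

    def forward(self, byte_ids):                         # (batch, seq_len)
        h = self.byte_emb(byte_ids)                       # (batch, seq_len, d_model)
        p = torch.sigmoid(self.frontier(h)).squeeze(-1)   # frontier probs, (batch, seq_len)
        # Expected block index of each byte = cumulative sum of frontier probs.
        pos = torch.cumsum(p, dim=-1)                     # (batch, seq_len)
        # Soft assignment of bytes to block slots with a Gaussian kernel.
        slots = torch.arange(self.max_blocks, device=byte_ids.device).float()
        w = torch.exp(-((pos.unsqueeze(-1) - slots) ** 2) / (2 * self.sigma ** 2))
        w = w / (w.sum(dim=1, keepdim=True) + 1e-9)       # normalize weights per block
        blocks = torch.einsum("bsk,bsd->bkd", w, h)       # (batch, max_blocks, d_model)
        return blocks, p


if __name__ == "__main__":
    tok = SoftBlockTokenizer()
    byte_ids = torch.randint(0, 256, (2, 128))
    blocks, frontiers = tok(byte_ids)
    print(blocks.shape)  # torch.Size([2, 32, 64])
```

Because every step in this pipeline is differentiable, the language-modeling loss can backpropagate into the frontier predictor, which is what allows the segmentation to adapt end-to-end instead of being fixed in advance as with static subword vocabularies.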

Citation (APA)

Godey, N., Castagné, R., de la Clergerie, É., & Sagot, B. (2022). MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 2859–2870). Association for Computational Linguistics (ACL).

Readers over time: Mendeley reader counts by year, 2022–2025 (chart omitted).

Readers' Seniority

PhD / Post grad / Masters / Doc: 7 (54%)
Researcher: 5 (38%)
Lecturer / Post doc: 1 (8%)

Readers' Discipline

Computer Science: 13 (76%)
Linguistics: 2 (12%)
Medicine and Dentistry: 1 (6%)
Neuroscience: 1 (6%)
