Abstract
Byte-pair encoding (BPE) is a ubiquitous algorithm in the subword tokenization process of language models as it provides multiple benefits. However, this process is solely based on pre-training data statistics, making it hard for the tokenizer to handle infrequent spellings. On the other hand, though robust to misspellings, pure character-level models often lead to unreasonably long sequences and make it harder for the model to learn meaningful words. To alleviate these challenges, we propose a character-based subword module (char2subword) that learns the subword embedding table in pre-trained models like BERT. Our char2subword module builds representations from characters out of the subword vocabulary, and it can be used as a drop-in replacement of the subword embedding table. The module is robust to character-level alterations such as misspellings, word inflection, casing, and punctuation. We further integrate it with BERT through pre-training while keeping BERT's transformer parameters fixed, thus providing a practical method. Finally, we show that incorporating our module into mBERT significantly improves performance on the social media Linguistic Code-switching Evaluation (LinCE) benchmark.
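For intuition, the following is a minimal PyTorch sketch of what a character-based subword module of this kind might look like: characters of a subword are embedded, encoded, and pooled into a single vector of BERT's embedding size, so it can stand in for a row of the subword embedding table. The class name, layer sizes, encoder choice, and mean-pooling step are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a char2subword-style module. Hyperparameters and the
# mean-pooling strategy are assumptions for illustration only.
import torch
import torch.nn as nn


class Char2Subword(nn.Module):
    def __init__(self, char_vocab_size=300, char_dim=64,
                 hidden_dim=256, subword_dim=768, num_layers=2):
        super().__init__()
        self.char_embed = nn.Embedding(char_vocab_size, char_dim, padding_idx=0)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=char_dim, nhead=4, dim_feedforward=hidden_dim,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.proj = nn.Linear(char_dim, subword_dim)

    def forward(self, char_ids):
        # char_ids: (batch, max_chars) integer character ids, 0 = padding
        mask = char_ids.eq(0)                      # True where padded
        x = self.char_embed(char_ids)              # (batch, chars, char_dim)
        x = self.encoder(x, src_key_padding_mask=mask)
        # Mean-pool over non-padding positions to get one vector per subword.
        lengths = (~mask).sum(dim=1, keepdim=True).clamp(min=1)
        pooled = x.masked_fill(mask.unsqueeze(-1), 0.0).sum(dim=1) / lengths
        return self.proj(pooled)                   # (batch, subword_dim)


# Embeddings produced here would replace lookups in the subword embedding
# table; per the abstract, the BERT transformer body stays frozen while this
# module is trained.
module = Char2Subword()
fake_chars = torch.randint(1, 300, (8, 12))        # 8 subwords, 12 chars each
print(module(fake_chars).shape)                    # torch.Size([8, 768])
```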